I wonder whether dynamic programming and greedy algorithms solve the same types of problems, either accurately or approximately. Specifically:
- As far as I know, dynamic programming can solve problems that have "optimal substructure". Can the same problems always be solved by greedy algorithms, either accurately or approximately? (I think yes.)
- Are there optimization problems that lack "optimal substructure" but can be solved by greedy algorithms, either accurately or approximately? If yes, what characterizes the problems that greedy algorithms can solve, either accurately or approximately?
Thanks and regards!

("Approximately" is hard to define, so I'm only going to address the "accurately" or "optimally" aspect of your questions.)
There's a nice discussion of the difference between greedy algorithms and dynamic programming in Introduction to Algorithms, by Cormen, Leiserson, Rivest, and Stein (Chapter 16, pages 381-383 in the second edition).
With respect to your first question, here's a summary of what they have to say. Both dynamic programming and greedy algorithms can be used on problems that exhibit "optimal substructure" (which CLRS defines by saying that an optimal solution to the problem contains within it optimal solutions to subproblems). However, in order for the greedy solution to be optimal, the problem must also exhibit what they call the "greedy-choice property"; i.e., a globally optimal solution can be arrived at by making locally optimal (greedy) choices. In contrast, dynamic programming is good for problems that exhibit not only optimal substructure but also overlapping subproblems. (This means that a particular subproblem can be reached in multiple ways. The dynamic programming algorithm computes the value of each subproblem once and then reuses that value every time the subproblem is revisited.)
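To make "overlapping subproblems" concrete, here is a minimal sketch using coin change with an illustrative coin system (the coin denominations below are my own example, not from CLRS). The subproblem `solve(n)` is reached along many different paths, so memoizing it once is what makes the DP efficient; a greedy "always take the largest coin" rule is not optimal for this coin system:

```python
from functools import lru_cache

def min_coins(amount, coins=(1, 3, 4)):
    """Fewest coins summing to `amount` (illustrative coin system)."""
    @lru_cache(maxsize=None)          # cache: each subproblem is solved once
    def solve(n):
        if n == 0:
            return 0
        # A subproblem solve(n - c) is shared by many larger subproblems,
        # which is exactly the "overlapping subproblems" property.
        return min(
            (solve(n - c) + 1 for c in coins if c <= n),
            default=float("inf"),
        )
    return solve(amount)
```

For example, `min_coins(6)` is 2 (3 + 3), whereas the greedy choice of the largest coin first would use three coins (4 + 1 + 1).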
They then contrast the 0-1 knapsack problem with the fractional knapsack problem. Both exhibit the optimal-substructure property, but only the second also exhibits the greedy-choice property. Thus the second can be solved to optimality with a greedy algorithm (or a dynamic programming algorithm, although greedy would be faster), but the first requires dynamic programming or some other non-greedy approach. See also Henry's example in the other answer.
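As a sketch of that contrast (the item representation and numbers below are my own illustration, not taken from CLRS): the fractional version is solved greedily by value-per-weight ratio, while the 0-1 version needs a DP table over capacities:

```python
def fractional_knapsack(items, capacity):
    """Greedy by value/weight ratio; optimal for the fractional problem."""
    total = 0.0
    for value, weight in sorted(items, key=lambda it: it[0] / it[1], reverse=True):
        take = min(weight, capacity)      # take as much of the best item as fits
        total += value * take / weight
        capacity -= take
        if capacity == 0:
            break
    return total

def knapsack_01(items, capacity):
    """DP over remaining capacity; the greedy ratio rule can fail here."""
    best = [0] * (capacity + 1)
    for value, weight in items:
        # Iterate capacities downward so each item is used at most once.
        for cap in range(capacity, weight - 1, -1):
            best[cap] = max(best[cap], best[cap - weight] + value)
    return best[capacity]
```

With items `[(60, 10), (100, 20), (120, 30)]` (value, weight) and capacity 50, the fractional optimum is 240 (greedy works), but the 0-1 optimum is 220; applying the same greedy ratio rule to the 0-1 problem would yield only 160.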
It may also be helpful to read the following, which is the heart of their discussion of the difference and which I quote in full (emphasis mine):
Basically, then, dynamic programming solves subproblems first and then uses the solutions to subproblems to construct solutions to larger problems. Greedy algorithms take on the entire larger problem first, and each greedy choice reduces the larger problem to a smaller subproblem. Thus the two kinds of algorithms are sort of inverses of each other.
With respect to your second question, here's another quote from CLRS (p. 380):