Why is the derivative used to represent marginal cost instead of the difference?

Marginal cost is informally defined as "the change in the total cost that arises when the quantity produced is incremented by one unit." And given a differentiable total cost function $C(q)$, the marginal cost is formally the derivative, $C'(q)$. But if I were given $C$ and asked for the additional cost incurred when the quantity produced is increased from 2 to 3, I would simply compute $C(3)-C(2)$; no need to bring calculus into the picture. In general, $C(3)-C(2) \neq C'(2)$. For example, if $C(q) = q^2$, then $C(3)-C(2) = 5$, but $C'(2) = 4$.
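
For concreteness, here is a minimal numeric check of the example above (a sketch; the cost function $C(q) = q^2$ and the values compared are exactly those from the preceding paragraph):

```python
# Numeric check: for C(q) = q^2, the one-unit difference
# C(3) - C(2) and the derivative C'(2) disagree.

def C(q):
    return q ** 2

def C_prime(q):
    return 2 * q  # derivative of q^2

print(C(3) - C(2))  # 5 -- marginal cost as a one-unit difference
print(C_prime(2))   # 4 -- marginal cost as a derivative
```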

Thus my question is: Why is the derivative used to represent marginal cost instead of the difference?

Notes:

  1. I thought this question must've been what's being asked here, but evidently not; the question there is (essentially) why $C'(3) \neq C(3)-C(2)$.
  2. I also asked this question on economics.stackexchange, but I am not satisfied with the answers given there.

1 Answer

The derivative is used to represent the marginal cost because it allows you to apply analytical methods to economics. Some things to note:

  • The functions here are in truth only defined for integer values. I work in the aircraft industry. We do not build or sell 0.734242 of a plane.
  • The marginal cost really is the difference of 1 unit, not the derivative.
  • However, we can extend these functions on the integers to all real numbers in the appropriate domains, though not uniquely: there are many functions on the reals that, when restricted to the integers, have exactly the same values. For the development of "Business Calculus", we intentionally overlook that and simply assume that an appropriate real-valued function has been provided to us.
  • Just as we extend our functions from the integers to the real numbers, we also extend the concepts from discrete to continuous. Marginal cost is the rate of change of cost with respect to the number of units built. It is the difference in cost over one unit precisely because one unit is the smallest change we can make in the actual cost function. We can also talk about the average marginal cost over a range of units, which is the change in cost over the range divided by the size of the range. But once we extend the cost function to all real numbers, we are no longer limited to a minimal change of one unit: we can take averages over smaller and smaller ranges, and the smaller those ranges are, the closer that average gets to the derivative. Indeed, the derivative is by definition the limit of the average rate of change as the size of the range goes to $0$: $C'(q) = \lim_{h \to 0} \frac{C(q+h) - C(q)}{h}$. So for this bogus function on the reals that we are pretending is the actual cost function, it makes sense to define marginal cost as the derivative instead of as a fixed difference of 1 unit (a numeric sketch of this limiting process follows the list).
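
A minimal numeric sketch of that limiting process (the cost function $C(q) = q^2$ is borrowed from the question; the shrinking step sizes $h$ are my own choice):

```python
# As the range size h shrinks, the average rate of change
# (C(q + h) - C(q)) / h approaches the derivative C'(2) = 4.
# The h = 1 row is the classic one-unit marginal cost, 5.

def C(q):
    return q ** 2

q = 2
for h in [1, 0.5, 0.1, 0.01, 0.001]:
    avg_rate = (C(q + h) - C(q)) / h
    print(f"h = {h:<6} average rate of change = {avg_rate:.4f}")

# h = 1      average rate of change = 5.0000
# h = 0.5    average rate of change = 4.5000
# ...
# h = 0.001  average rate of change = 4.0010
```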

The real question is: why do we bother with this? Why introduce a bogus function defined on all real numbers, along with derivatives, integrals, and the rest? Because calculus trains you to look at functions differently and gives you tools that are actually easier to use than finite-difference methods. Calculus trains you to look at increasing vs. decreasing behavior, maxima, minima, inflection points, and concavity vs. convexity. Try to find the minimum of a polynomial defined only on the integers: it is significantly harder than the same problem for a polynomial on all the reals, as the toy comparison below shows. For theorists, it opens up a whole language of functions, originally developed mainly for physics, and provides a whole host of results about them.
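
As a toy illustration of that last point (the polynomial $f(q) = q^2 - 7q + 50$ and the search range are invented for this sketch, not taken from the answer):

```python
# Minimizing a polynomial: one line of calculus on the reals
# versus a brute-force search when only integers are allowed.

def f(q):
    return q ** 2 - 7 * q + 50

# On the reals: f'(q) = 2q - 7 = 0  =>  q = 3.5.
q_real = 7 / 2

# On the integers: no derivative to set to zero, so we search.
# (The range 0..99 is an arbitrary window for this sketch.)
q_int = min(range(100), key=f)

print(q_real, f(q_real))  # 3.5 37.75
print(q_int, f(q_int))    # 3 38  (f(4) = 38 as well; min returns the first)
```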

Business calculus was invented to allow theorists to more easily develop economic theories. It is taught to non-theorists because it introduces them to that language and points out the things that are important in the actual information they will encounter.