Is a function, in part, defined by its domain?


I have read the definition of a function from two sources. Both sources state that a function defines a relationship between the input and the output.

However, the first source states that it is the domain and process that defines the function:

''In fact the Domain is an essential part of the function. Change the Domain and we have a different function. ''

I think the logic there is that, since a function maps each value in the domain to only one value in the range, changing the domain changes the values of the second set (the range). Since the two sets change, so too does the function.

The latter states that it's only the process. It doesn't explicitly say that the function isn't defined by its domain, but it doesn't state that it is, either.

First source: http://www.mathsisfun.com/sets/domain-range-codomain.html

Second source: https://www.khanacademy.org/math/algebra/algebra-functions/intro-to-functions/v/what-is-a-function

Personally I think the latter is correct, since if a function is written like f(x), then this must (in my mind) mean that the function isn't defined by its domain, just its process, since if the domain for f(x) changes, we would have f(x) ≠ f(x), which seems silly.


It all depends on what you call a function, but the definition that is universally used by mathematicians today (excluding some logicians perhaps, and people who work on foundations of mathematics) is that a function is defined as a triple $(X,Y,\Gamma)$ where $X$ is the domain, $Y$ is the codomain, and $\Gamma\subset X\times Y$ is the graph.

So it is a fact that if you change the domain, you change the function, because the domain is part of the data.

This being said, mathematicians commonly identify functions that only differ by their domain, and most of the time it is harmless.
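To make the triple definition concrete, here is a minimal Python sketch (the helper `make_function` and the finite example sets are my own illustration, not anything from the sources). It represents a function literally as the triple $(X, Y, \Gamma)$, so two functions with the same rule but different domains compare as unequal:

```python
def make_function(domain, codomain, rule):
    """Build a function as the triple (X, Y, Gamma), where the
    graph Gamma is the set of pairs (x, rule(x)) for x in X."""
    graph = frozenset((x, rule(x)) for x in domain)
    return (frozenset(domain), frozenset(codomain), graph)

# The same rule x -> x**2 on two different domains gives
# two different triples, hence two different functions.
f = make_function({0, 1, 2}, {0, 1, 4}, lambda x: x**2)
g = make_function({0, 1, 2, 3}, {0, 1, 4, 9}, lambda x: x**2)

print(f == g)  # False: changing the domain changed the function
```

This only works for finite domains, of course, but it captures the point: the domain is part of the data, so changing it changes the object.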


As @MattSamuel points out, part of your mistake stems from the fact that you see functions as "processes".

This is certainly a valid point of view in many respects, but it is not what mathematicians call functions. There have been many attempts to build a theory of "processes", i.e. computable functions, and they all amount to the same thing: functions computable by a Turing machine (aka "recursive functions"). In fact the famous Church–Turing thesis states that this is the ultimate reasonable definition of computability (though this is arguably subject to debate).

First note that you most likely won't be able to define functions $\mathbb{R}\to \mathbb{R}$ as "processes", because you would need some kind of effective representation of real numbers. I guess you could make up a model for this kind of computability, but it wouldn't correspond to anything in the "real world".

But more importantly, it can be shown very rigorously that some (actually lots) of functions $\mathbb{N}\to \mathbb{N}$ cannot be computed in any reasonable way (at least not by a Turing machine).

As a first argument, there are uncountably many functions $\mathbb{N}\to \mathbb{N}$, but only countably many Turing machines, so not all functions can be computed by such a machine. In more ordinary language: there are only countably many possible programs, say in C, because each one is given by a finite sequence of symbols (the text of the program). So there are infinitely more functions than there are programs.
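The countability of programs can even be exhibited directly: the following Python sketch (a toy illustration of mine, with a two-letter alphabet standing in for a real character set) enumerates every finite string over a finite alphabet in one countable list, which in principle contains the text of every possible program:

```python
from itertools import count, product

def all_programs(alphabet):
    """Enumerate every finite string over a finite alphabet,
    in order of increasing length.  This single countable list
    contains, in principle, the text of every possible program."""
    for length in count(1):
        for chars in product(alphabet, repeat=length):
            yield "".join(chars)

gen = all_programs("ab")
first = [next(gen) for _ in range(6)]
print(first)  # ['a', 'b', 'aa', 'ab', 'ba', 'bb']
```

Since the enumeration never misses a string, the set of programs is countable, while Cantor's diagonal argument shows the functions $\mathbb{N}\to \mathbb{N}$ are not.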

But we can also explicitly define functions that we can prove cannot be computed. The most common (and convincing) examples involve the halting problem, but it takes some effort to define it properly, so I refer you to https://en.wikipedia.org/wiki/Halting_problem.

As an example that is easier to define: there is an explicit polynomial $P\in \mathbb{Z}[x_0,x_1,\dots,x_r]$ (though I don't know what it looks like; you can probably find an example on the internet) such that the function $f: \mathbb{N}\to \{0;1\}$ defined by "$f(n) = 1$ if and only if $P(n,x_1,\dots,x_r)=0$ has a solution in $\mathbb{Z}^r$" (which is perfectly well defined) cannot be computed by a Turing machine.

As to why such a thing is possible: of course when $P(n,x_1,\dots,x_r)=0$ has a solution, your machine will be able to find it eventually (just test all possible values until you find one), but when there is no solution your machine will just loop indefinitely and never return anything. And this is not a matter of not finding the right algorithm: there is no algorithm. This is a theorem.
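The asymmetry described above can be sketched in a few lines of Python. The polynomial below is a hypothetical toy example of mine (not one of the genuinely uncomputable $P$'s from the theorem): a bounded search can confirm a solution when one exists, but when none exists, no finite bound ever settles the question, which is exactly why the unbounded search loops forever:

```python
from itertools import product

def has_root(P, n, r, bound):
    """Search for an integer solution of P(n, x_1, ..., x_r) = 0
    with each |x_i| <= bound.  If a solution exists, a large enough
    bound finds it; if none exists, no bound certifies that fact --
    the unbounded search would simply run forever."""
    values = range(-bound, bound + 1)
    for xs in product(values, repeat=r):
        if P(n, *xs) == 0:
            return True
    return False

# Toy polynomial: P(n, x) = x**2 - n, so "has a root" means
# "n is a perfect square".
P = lambda n, x: x**2 - n
print(has_root(P, 9, 1, 10))  # True: x = 3 is a solution
print(has_root(P, 7, 1, 10))  # False within this bound -- but the
                              # unbounded search could never certify it
```

For this toy polynomial we happen to know a shortcut (check whether $n$ is a perfect square), but the theorem says that for the genuinely uncomputable $P$, no such shortcut can exist.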