Machine learning - Cost function for nonlinear functions


The cost function is some indication of the 'cost', i.e., how far the predicted value differs from the actual value. In linear regression, this can be measured using the MSE. In the case of the logistic function, this is done using the log-likelihood. Is this always the case for nonlinear functions that map the inputs and weights to a probability?

Best answer:

All right. Firstly, since you are new to these concepts, I would recommend keeping a good reference book with you. Probably something like Pattern Recognition and Machine Learning by Bishop.

Now, the answer to your question is too big to cover entirely, so I point you to the book again. But I'll try to explain it a bit.

Let us take the classification problem (only two classes). If you were given a computer able to solve such things automatically, your first idea would not be one of the standard loss functions, but something like $$L_d(h)=\sum_{(x,y)}\mathbb{I}_{\{h(x)\neq y\}}$$ where $h(\cdot)$ is your program and the $(x, y)$ are your specifications (the labelled examples). Here $\mathbb{I}$ is the indicator function.

(You're basically just enumerating every point in your data and penalising the program whenever it is wrong.) The problem is that we cannot hope to solve this by brute-force enumeration for the very large datasets we usually work with, so we take a few approaches to make the penalty more amenable.
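To make the counting concrete, here is a minimal Python sketch of that penalty; the threshold classifier and the four data points are made up for illustration.

```python
import numpy as np

def zero_one_loss(h, X, y):
    """Empirical 0-1 penalty: count every point where the program h is wrong."""
    predictions = np.array([h(x) for x in X])
    return int(np.sum(predictions != y))

# Made-up example: a simple threshold classifier on 1-D inputs.
h = lambda x: int(x > 0.5)
X = np.array([0.1, 0.4, 0.6, 0.9])
y = np.array([0, 1, 1, 1])

print(zero_one_loss(h, X, y))  # 1 -- only the second point is misclassified
```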

Method 1: Optimisation approach

You need to come up with a loss function over which we can actually optimise $h(\cdot)$. It turns out we have efficient algorithms for this only when the loss is convex (as a function of the parameters of $h$). So any convex loss function that upper-bounds and approximates $L_d(h)$ would work, assuming $h$ is linear or otherwise keeps the problem convex. (Upper bounding is just a sufficient condition, and we can make do without it.)

You might have heard of the sigmoid function. It gives a very good smooth approximation of $L_d$, but it is not convex. Here the log function can be used to make it convex! (Try it out.)

So, under some conditions, non-convex approximations can be made convex by a simple tool: the log.
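A minimal sketch of that idea, assuming a linear score $z = y\,(w\cdot x)$ (the signed margin, with labels in $\{-1,+1\}$): the sigmoid-based penalty approximates the 0-1 loss but is not convex in $z$, while its negative log (the logistic loss) is convex.

```python
import numpy as np

def sigmoid_penalty(z):
    """sigmoid(-z): a smooth approximation of the 0-1 loss, but not convex in z."""
    return 1.0 / (1.0 + np.exp(z))

def logistic_loss(z):
    """-log(sigmoid(z)) = log(1 + exp(-z)): convex in z; divided by log(2) it
    even upper-bounds the 0-1 loss."""
    return np.log1p(np.exp(-z))

z = np.linspace(-4.0, 4.0, 9)   # signed margins, made up for the example
print(np.round(sigmoid_penalty(z), 3))
print(np.round(logistic_loss(z), 3))
```

Plotting both against $z$ makes the difference visible: the sigmoid penalty has an S-shape, while the logistic loss is convex everywhere, which is exactly the "try it out" above.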

Method 2: Probabilistic approach

For any dataset, you assume the data to be generated from some distribution, and you want your function $h(\cdot)$ to parametrise that distribution as well as possible. For example, you might assume the classification process is a Bernoulli trial whose success probability is some function of the input, $f(x)$. Your $h(x)$ tries to find/approximate $f(x)$, since we don't know it. Given this, we write the likelihood: $$L(\{(x_i, y_i)\}, h) = \mathbb{P}(\{(x_i, y_i)\}\mid f=h)$$

where $\{(x_i, y_i)\}$ is the dataset/samples.

And we try to maximise it (I won't explain this completely here because it is quite a vast topic, and I refer you to the book by Bishop again). The MLE, or maximum likelihood estimator, maximises the likelihood, i.e., it finds $\operatorname{argmax}_{h}\, L(\{(x_i, y_i)\}, h)$. So the computer, while optimising the loss, is actually trying to find the MLE.
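As a small numeric sketch (the predicted probabilities and labels below are made up): for a Bernoulli model where $h(x_i)$ is the predicted probability of $y_i=1$, the log-likelihood is just the negative of the usual cross-entropy/log-loss, so maximising one is the same as minimising the other.

```python
import numpy as np

def bernoulli_log_likelihood(p, y):
    """Log-likelihood of labels y in {0, 1} under predicted probabilities p = h(x)."""
    return float(np.sum(y * np.log(p) + (1 - y) * np.log(1 - p)))

def log_loss(p, y):
    """The usual cross-entropy loss: exactly the negated log-likelihood."""
    return -bernoulli_log_likelihood(p, y)

# Made-up predictions h(x_i) and labels y_i.
p = np.array([0.9, 0.2, 0.7, 0.6])
y = np.array([1, 0, 1, 0])

print(bernoulli_log_likelihood(p, y))  # the MLE picks the h that maximises this ...
print(log_loss(p, y))                  # ... which is the h that minimises this
```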

Why MLE? I'll leave that for you to explore. But in short, under fairly mild conditions it is consistent and asymptotically efficient, i.e., asymptotically it attains the lowest variance achievable by unbiased estimators.

But the thing is, finding the arg max over the probability function or over the log of the probability function makes no difference, since log is increasing. So the log here is used as a tool to simplify the MLE-based loss. For example, in the Gaussian case your $f$ is usually a function of the mean and variance, so $h$ needs to approximate the mean and variance. Taking the log removes the exponential part and gives us easy functions (like the L2 norm) to optimise over.
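To spell out the Gaussian example: if we model $y = h(x) + \varepsilon$ with Gaussian noise $\varepsilon$ of fixed variance $\sigma^2$, the log-likelihood of a single sample is $$\log \mathbb{P}(y\mid x, h) = \log\left(\frac{1}{\sqrt{2\pi\sigma^2}}\exp\left(-\frac{(y-h(x))^2}{2\sigma^2}\right)\right) = -\frac{(y-h(x))^2}{2\sigma^2} - \frac{1}{2}\log\left(2\pi\sigma^2\right),$$ so maximising the log-likelihood over the dataset is the same as minimising $\sum_i \big(y_i - h(x_i)\big)^2$, which is exactly the squared-error/MSE loss from linear regression that the question mentions.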

There are many other analogies (like information theory), but the main point is that the loss should be convex and should approximate our intuitive penalty defined earlier.

Also note there are loss functions that don't use a log at all, like the hinge loss, dice loss, etc. (a quick sketch of the hinge loss is below). And I think I wrote too much, oops.
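For completeness, a minimal sketch of the hinge loss (labels in $\{-1,+1\}$; the scores and labels are made up): it is convex and upper-bounds the 0-1 loss, but no log-likelihood is involved.

```python
import numpy as np

def hinge_loss(scores, y):
    """Hinge loss for labels y in {-1, +1}: convex, upper-bounds the 0-1 loss, no log."""
    return float(np.mean(np.maximum(0.0, 1.0 - y * scores)))

# Made-up scores w . x and labels.
scores = np.array([2.0, -0.5, 0.3, -1.5])
y = np.array([1, -1, 1, -1])

print(hinge_loss(scores, y))  # 0 only when every point is classified with margin >= 1
```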