Question

I learnt gradient descent through online resources (namely the Machine Learning course on Coursera). However, the material only said to repeat gradient descent until it converges.

Their definition of convergence was to plot the cost function against the number of iterations and watch when the graph flattens out. Therefore I assume I would do the following:

while (change_in_costfunction > precisionvalue) {
    repeat gradient_descent
}

Alternatively, I was wondering if another way to determine convergence is to watch each coefficient approach its true value:

while (change_in_coefficient_j > precisionvalue) {
    repeat gradient_descent_for_j
}
...repeat for all coefficients
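
To make both alternatives concrete, here is a rough Python sketch of what I have in mind (the toy data, compute_cost, gradient_step, and the learning rate are placeholders I made up, not anything from the course):

import numpy as np

def compute_cost(theta, X, y):
    # Mean squared error cost J(theta) for linear regression
    residuals = X @ theta - y
    return (residuals @ residuals) / (2 * len(y))

def gradient_step(theta, X, y, alpha=0.1):
    # One batch gradient-descent update of all coefficients at once
    grad = X.T @ (X @ theta - y) / len(y)
    return theta - alpha * grad

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(50), rng.uniform(0, 1, 50)])  # intercept + one feature
y = 2.0 + 3.0 * X[:, 1] + rng.normal(0, 0.1, 50)
precisionvalue = 1e-9

# Alternative 1: stop when the cost function barely changes
theta = np.zeros(2)
cost = compute_cost(theta, X, y)
for _ in range(100_000):
    theta = gradient_step(theta, X, y)
    new_cost = compute_cost(theta, X, y)
    if abs(cost - new_cost) < precisionvalue:   # change_in_costfunction
        break
    cost = new_cost

# Alternative 2: stop when every coefficient barely changes
theta2 = np.zeros(2)
for _ in range(100_000):
    new_theta = gradient_step(theta2, X, y)
    done = np.all(np.abs(new_theta - theta2) < precisionvalue)  # change_in_coefficient_j
    theta2 = new_theta
    if done:
        break

print("cost-based stop:       ", theta)   # both end up near [2, 3]
print("coefficient-based stop:", theta2)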

So is convergence based on the cost function or on the coefficients? And how do we determine the precision value? Should it be a percentage of the coefficient or of the total cost function?


Solution

You can picture how Gradient Descent (GD) works by imagining that you throw a marble into a bowl and start taking photos of it. The marble will oscillate until friction stops it at the bottom. Now imagine an environment where friction is so small that the marble takes a very long time to stop completely; we can then assume that once the oscillations are small enough the marble has reached the bottom (although it could keep oscillating). In the image below you could see the first eight steps (photos of the marble) of GD.

[Image: the first eight gradient-descent steps, shown as photos of the marble descending into the bowl]

If we keep taking photos, the marble makes no appreciable movement; you would have to zoom in on the image to see it:

[Image: zoomed-in view of the later steps, where the movements are barely visible]

We could keep taking photos, and the movements would become ever more negligible.

Reaching a point at which GD makes only very small changes to your objective function is called convergence. It does not mean GD has reached the optimal result, but it is very close to it, if not on it.
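
To see the analogy in code, here is a tiny Python sketch (the bowl f(x) = x^2 and the learning rate 0.8 are assumptions chosen for illustration): with a deliberately large step size, the iterate overshoots the bottom and oscillates with ever smaller moves, just like the marble.

x = 3.0         # where the marble is dropped into the bowl f(x) = x**2
alpha = 0.8     # deliberately large learning rate, so each step overshoots
for photo in range(1, 9):
    step = alpha * 2 * x          # gradient of x**2 is 2x
    x = x - step
    print(f"photo {photo}: x = {x:+.5f}, move = {step:+.5f}")

Running it, the printed "move" column alternates in sign and shrinks by a constant factor at each photo, which is exactly the flattening you wait for.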

The precision value can be chosen as the threshold below which consecutive iterations of GD are considered almost the same:

grad(i)   = 0.0001
grad(i+1) = 0.000099992  <-- grad has changed less than 0.01% => STOP
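
Written out as a sketch in Python, the test could look like this (the helper rel_change and the 0.01% tolerance are illustrative choices, not a fixed convention):

def rel_change(prev, curr):
    # relative change between two consecutive values of the gradient (or cost)
    return abs(prev - curr) / abs(prev)

grad_i, grad_next = 0.0001, 0.000099992
if rel_change(grad_i, grad_next) < 1e-4:   # less than 0.01%
    print("STOP: consecutive iterations are almost the same")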

Other tips

I think I understand your question. Based on my understanding, GD is driven by the cost function: it iterates until the cost function converges.

Imagine plotting a graph of the cost function (y-axis) against the number of GD iterations (x-axis). If GD works properly, the curve is decreasing and concave up (similar to the curve of 1/x). Since the curve is decreasing, the drop in the cost function becomes smaller and smaller, and there comes a point where the curve is almost flat. Around that point we say GD has more or less converged (again, where the cost function decreases by less than the precision_value per iteration).
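
Here is a minimal Python sketch of that diagnostic plot (the toy linear-regression setup is my own assumption; the point is only to record the cost at every iteration and look for the flat region):

import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
X = np.column_stack([np.ones(50), rng.uniform(0, 1, 50)])
y = 2.0 + 3.0 * X[:, 1] + rng.normal(0, 0.1, 50)

theta = np.zeros(2)
alpha = 0.1
costs = []
for _ in range(500):
    residuals = X @ theta - y
    costs.append((residuals @ residuals) / (2 * len(y)))    # record J(theta)
    theta -= alpha * X.T @ residuals / len(y)               # one GD step

plt.plot(costs)
plt.xlabel("iteration")
plt.ylabel("cost J(theta)")
plt.title("Convergence is where the curve flattens out")
plt.show()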

So I would say your first approach is what you need:

while (change_in_costFunction > precision_value) {
    repeat GD;
}
