Machine Learning For Dummies
The gradient descent algorithm offers a perfect example of how machine learning works. You can describe it with an intuitive image, not just a mathematical formulation. Moreover, though it is just one of many possible methods, gradient descent is a widely used approach that applies to a series of machine learning algorithms, such as linear models, neural networks, and gradient boosting machines.

Given a set of parameters to optimize and some data (a matrix made of features and a response), gradient descent works out a solution by starting from a random one. It then proceeds through a series of iterations, using feedback from the cost function to change the parameters with values that gradually improve on the initial random solution and lower the error.
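The following Python sketch shows this loop for a simple linear model with a mean squared error cost. It's an illustrative minimal example, not the book's code; the function name, learning rate, and number of iterations are assumptions chosen for clarity.

```python
import numpy as np

# Minimal sketch of gradient descent for a linear model with a
# mean squared error cost. Names and settings are illustrative.

def gradient_descent(X, y, learning_rate=0.01, iterations=1000):
    """Fit weights w so that X @ w approximates y."""
    w = np.random.randn(X.shape[1])          # random starting solution
    for _ in range(iterations):
        predictions = X @ w
        error = predictions - y               # feedback from the cost function
        gradient = 2 * X.T @ error / len(y)   # direction of steepest ascent of the MSE
        w -= learning_rate * gradient         # step downhill, lowering the error
    return w

# Example: recover a known linear relationship from noisy data
rng = np.random.default_rng(42)
X = np.c_[np.ones(100), rng.uniform(-1, 1, 100)]     # intercept plus one feature
y = X @ np.array([3.0, -2.0]) + rng.normal(0, 0.1, 100)
print(gradient_descent(X, y))                         # roughly [3, -2]
```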

Even though the optimization may take a large number of iterations before reaching a good mapping, it relies on changes that improve the cost function the most (lower error) during each iteration. The figure below shows an example of a complex optimization process with many local minima (the minimum points on the curve, marked with letters) where the process can get stuck (it no longer continues after reaching the deep minimum marked with an asterisk) and cannot continue its descent.

Figure: A plot of parameter data against the output of the cost function.

You can visualize the optimization process as a walk in high mountains, with the parameters being the different paths that descend to the valley. At each iteration, gradient descent chooses the step that reduces the error the most, regardless of the direction taken. The idea is that, provided the steps aren't too large (causing the algorithm to jump over the target), always following the most downward direction results in finding the lowest place.
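A toy example (not from the book) makes the step-size point concrete. On the one-dimensional cost function f(w) = (w - 3)^2, whose minimum sits at w = 3 and whose gradient is 2(w - 3), a small step size walks down to the minimum, while a step size that's too large jumps over the target and diverges. The learning-rate values below are illustrative assumptions.

```python
# Toy illustration of how step size affects the descent on
# f(w) = (w - 3)**2, whose minimum is at w = 3.

def descend(learning_rate, steps=20, start=0.0):
    w = start
    for _ in range(steps):
        w -= learning_rate * 2 * (w - 3)   # gradient of (w - 3)**2 is 2 * (w - 3)
    return w

print(descend(0.1))   # small steps: ends close to 3
print(descend(1.1))   # steps too large: overshoots the target and diverges
```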

Unfortunately, this result doesn’t always occur because the algorithm can arrive at intermediate valleys, creating the illusion that it has reached the target. However, in most cases, gradient descent leads the machine learning algorithm to discover the right hypothesis for successfully mapping the problem. The figure below shows how a different starting point can make the difference: starting point x1 ends up in a local minimum, whereas points x2 and x3 reach the global minimum.

Figure: Visualizing the effect of the starting point on the outcome.

In an optimization process, you distinguish between different outcomes. You can have a global minimum that's truly the minimum error from the cost function, and you can have many local minima, solutions that seem to produce the minimum error but actually don't (the intermediate valleys where the algorithm gets stuck). As a remedy, given the optimization process's random initialization, running the optimization several times is good practice. Doing so means trying different sequences of descending paths and not getting stuck in the same local minimum.
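The sketch below shows one way to apply that remedy, assuming a cost function with more than one valley, f(w) = w^4 - 3w^2 + w, and a handful of random restarts. The function, the number of restarts, and the settings are illustrative assumptions, not the book's code.

```python
import numpy as np

# Illustrative sketch: run gradient descent several times from random
# starting points and keep the best result on a cost function with
# two valleys: f(w) = w**4 - 3*w**2 + w (gradient: 4*w**3 - 6*w + 1).

def cost(w):
    return w**4 - 3 * w**2 + w

def gradient(w):
    return 4 * w**3 - 6 * w + 1

def descend(start, learning_rate=0.01, steps=500):
    w = start
    for _ in range(steps):
        w -= learning_rate * gradient(w)
    return w

rng = np.random.default_rng(0)
solutions = [descend(rng.uniform(-2, 2)) for _ in range(10)]
best = min(solutions, key=cost)
print(best)   # the restart that reached the deepest valley (near w = -1.3)
```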

About the book authors:

John Mueller has produced 114 books and more than 600 articles on topics ranging from functional programming techniques to working with Amazon Web Services (AWS). Luca Massaron, a Google Developer Expert (GDE), interprets big data and transforms it into smart data through simple and effective data mining and machine learning techniques.