Gradient Descent For Machine Learning

Optimization is a big part of machine learning. Almost every machine learning algorithm has an optimization algorithm at its core.

Summary

In this post you discovered gradient descent for machine learning. You learned that:

  • Optimization is a big part of machine learning.
  • Gradient descent is a simple optimization procedure that you can use with many machine learning algorithms.
  • Batch gradient descent refers to calculating the derivative from all training data before calculating an update.
  • Stochastic gradient descent refers to calculating the derivative from each training data instance and calculating the update immediately.

Kick-start your project with my new book Master Machine Learning Algorithms, including step-by-step tutorials and the Excel Spreadsheet files for all examples.

Let’s get started.

Gradient Descent For Machine Learning (Photo by Grand Canyon National Park)

Gradient Descent

Gradient descent is an optimization algorithm used to find the values of the parameters (coefficients) of a function (f) that minimize a cost function (cost).

Gradient descent is best used when the parameters cannot be calculated analytically (e.g. using linear algebra) and must be searched for by an optimization algorithm.

Intuition for Gradient Descent

Think of a large bowl like what you would eat cereal out of or store fruit in. This bowl is a plot of the cost function (f).

Large Bowl (Photo by William Warby)

A random position on the surface of the bowl is the cost of the current values of the coefficients (cost).

The bottom of the bowl is the cost of the best set of coefficients, the minimum of the function.

The goal is to continue to try different values for the coefficients, evaluate their cost and select new coefficients that have a slightly better (lower) cost.

Repeating this process enough times will lead to the bottom of the bowl and you will know the values of the coefficients that result in the minimum cost.

Gradient Descent Procedure

The procedure starts off with initial values for the coefficient or coefficients for the function. These could be 0.0 or a small random value.

coefficient = 0.0

The cost of the coefficients is evaluated by plugging them into the function and calculating the cost.

cost = f(coefficient)

or

cost = evaluate(f(coefficient))
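
To make this step concrete, here is a minimal Python sketch. The one-coefficient model (prediction = coefficient * x), the sum-of-squared-errors cost and the tiny dataset are illustrative assumptions, not part of the procedure itself:

X = [1.0, 2.0, 3.0, 4.0]
Y = [2.0, 4.0, 6.0, 8.0]

def evaluate(coefficient):
    # cost = sum of squared differences between predictions and actual values
    return sum((coefficient * x - y) ** 2 for x, y in zip(X, Y))

coefficient = 0.0
cost = evaluate(coefficient)  # 120.0 for this made-up data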

The derivative of the cost is calculated. The derivative is a concept from calculus and refers to the slope of the function at a given point. We need to know the slope so that we know the direction (sign) to move the coefficient values in order to get a lower cost on the next iteration.

delta = derivative(cost)
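
Continuing the illustrative example above, the derivative of the assumed squared-error cost can be worked out by hand (for a real model you would use the known gradient of your chosen cost function):

X = [1.0, 2.0, 3.0, 4.0]  # same made-up data as in the previous snippet
Y = [2.0, 4.0, 6.0, 8.0]

def derivative(coefficient):
    # slope of sum((c*x - y)^2) with respect to c is sum(2*x*(c*x - y))
    return sum(2.0 * x * (coefficient * x - y) for x, y in zip(X, Y))

delta = derivative(0.0)  # -120.0: a negative slope, so the coefficient should increase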

Now that we know from the derivative which direction is downhill, we can update the coefficient values. A learning rate parameter (alpha) must be specified that controls how much the coefficients can change on each update.

coefficient = coefficient - (alpha * delta)

This process is repeated until the cost of the coefficients (cost) is 0.0 or close enough to zero to be good enough.
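
Putting the steps together, a minimal self-contained sketch of the whole procedure might look like the following. The dataset, the squared-error cost, the learning rate and the stopping threshold are all assumed values for the example:

X = [1.0, 2.0, 3.0, 4.0]
Y = [2.0, 4.0, 6.0, 8.0]

def evaluate(coefficient):
    return sum((coefficient * x - y) ** 2 for x, y in zip(X, Y))

def derivative(coefficient):
    return sum(2.0 * x * (coefficient * x - y) for x, y in zip(X, Y))

alpha = 0.01       # learning rate
coefficient = 0.0  # initial value

for i in range(50):
    delta = derivative(coefficient)
    coefficient = coefficient - alpha * delta
    if evaluate(coefficient) < 1e-6:  # close enough to zero to be good enough
        break

print(coefficient)  # approaches 2.0, the coefficient with minimum cost for this data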

You can see how simple gradient descent is. It does require you to know the gradient of your cost function or the function you are optimizing, but besides that, it’s very straightforward. Next we will see how we can use this in machine learning algorithms.

Batch Gradient Descent for Machine Learning

The goal of all supervised machine learning algorithms is to best estimate a target function (f) that maps input data (X) onto output variables (Y). This describes all classification and regression problems.

Some machine learning algorithms have coefficients that characterize the algorithm's estimate for the target function (f). Different algorithms have different representations and different coefficients, but many of them require a process of optimization to find the set of coefficients that result in the best estimate of the target function.

Common examples of algorithms with coefficients that can be optimized using gradient descent are Linear Regression and Logistic Regression.

How closely a machine learning model fits the target function can be evaluated in a number of different ways, often specific to the machine learning algorithm. The cost function evaluates the coefficients in the machine learning model by calculating a prediction for each training instance in the dataset, comparing the predictions to the actual output values, and calculating a sum or average error (such as the Sum of Squared Residuals, or SSR, in the case of linear regression).

From the cost function a derivative can be calculated for each coefficient so that it can be updated using exactly the update equation described above.

The cost is calculated for a machine learning algorithm over the entire training dataset for each iteration of the gradient descent algorithm. One iteration of the algorithm is called one batch and this form of gradient descent is referred to as batch gradient descent.

Batch gradient descent is the most common form of gradient descent described in machine learning.
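
As an illustrative sketch only, batch gradient descent for simple linear regression (prediction = b0 + b1 * x) could be written as below; the small dataset, the sum of squared residuals cost and the learning rate are assumptions made for the example:

X = [1.0, 2.0, 4.0, 3.0, 5.0]
Y = [1.0, 3.0, 3.0, 2.0, 5.0]
b0, b1 = 0.0, 0.0  # initial coefficients
alpha = 0.01       # learning rate

for epoch in range(4000):
    # one batch: accumulate the gradient of the sum of squared residuals
    # over the entire training dataset before making any update
    grad_b0 = sum(2.0 * ((b0 + b1 * x) - y) for x, y in zip(X, Y))
    grad_b1 = sum(2.0 * ((b0 + b1 * x) - y) * x for x, y in zip(X, Y))
    # one coefficient update per pass over all of the training data
    b0 = b0 - alpha * grad_b0
    b1 = b1 - alpha * grad_b1

print(b0, b1)  # converges to roughly 0.4 and 0.8 for this made-up data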