{
"cells": [
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"%matplotlib inline"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"Autograd: Automatic Differentiation\n",
"===================================\n",
"\n",
"Central to all neural networks in PyTorch is the ``autograd`` package.\n",
"Let's briefly visit this package first, and then go on to train our first simple neural network.\n",
"\n",
"The ``autograd`` package provides automatic differentiation for all operations on tensors.\n",
"It is a define-by-run framework, which means that backprop is defined by how your code is run, and that every single iteration can be different.\n",
"\n",
"Let's see this in simpler terms with some examples.\n",
"\n",
"Tensor\n",
"--------\n",
"\n",
"``torch.Tensor`` is the central class of the package. If you set its\n",
"``.requires_grad`` attribute to ``True``, it starts to track all operations on it.\n",
"When you finish your computation, call ``.backward()`` and all the gradients are computed automatically;\n",
"the gradients for this tensor are accumulated into its ``.grad`` attribute.\n",
"\n",
"To stop a tensor from tracking history, you can call ``.detach()`` to detach it from the computation history and to prevent future computation from being tracked.\n",
"\n",
"To prevent tracking history (and using memory), you can also wrap the code block in ``with torch.no_grad():``.\n",
"This is particularly helpful when evaluating a model, because the model may have trainable parameters with ``requires_grad=True`` for which we don't need the gradients.\n",
"\n",
"There is one more class that is very important for autograd: ``Function``.\n",
"\n",
"``Tensor`` and ``Function`` are interconnected and build up an acyclic graph that encodes a complete history of computation.\n",
"Each tensor has a ``.grad_fn`` attribute that references the ``Function`` that created the ``Tensor`` (except for tensors created by the user - their\n",
"``grad_fn`` is ``None``).\n",
"\n",
"If you want to compute the derivatives, you can call ``.backward()`` on a ``Tensor``.\n",
"If the ``Tensor`` is a scalar (i.e. it holds a single element of data), you don't need to specify any arguments to ``backward()``,\n",
"but if it has more elements, you need to specify a ``gradient`` argument that is a tensor of matching shape.\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"***Translator's note: in other articles you may see the advice to wrap a Tensor in a Variable to enable automatic gradient computation. Variable has been deprecated since version 0.4; you can now use Tensor directly. The official documentation is here:***\n",
"(https://pytorch.org/docs/stable/autograd.html#variable-deprecated) \n",
"\n",
"More details on this follow later."
]
},
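{
"cell_type": "markdown",
"metadata": {},
"source": [
"A minimal sketch of the difference the note describes (added for illustration; in PyTorch >= 0.4, ``torch.autograd.Variable`` is only kept as a thin backward-compatibility wrapper that returns a plain ``Tensor``):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import torch\n",
"from torch.autograd import Variable  # deprecated, kept only for backward compatibility\n",
"\n",
"# Old style: wrap a Tensor in a Variable to get autograd\n",
"v = Variable(torch.ones(2, 2), requires_grad=True)\n",
"\n",
"# New style: a Tensor tracks gradients directly\n",
"t = torch.ones(2, 2, requires_grad=True)\n",
"\n",
"# Both are plain Tensors in PyTorch >= 0.4\n",
"print(type(v), type(t))"
]
},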
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"import torch"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Create a tensor and set ``requires_grad=True`` to track computation with it:\n"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"tensor([[1., 1.],\n",
"        [1., 1.]], requires_grad=True)\n"
]
}
],
"source": [
"x = torch.ones(2, 2, requires_grad=True)\n",
"print(x)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Do an operation on the tensor:\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"tensor([[3., 3.],\n",
"        [3., 3.]], grad_fn=<AddBackward0>)\n"
]
}
],
"source": [
"y = x + 2\n",
"print(y)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"``y`` was created as the result of an operation, so it has a ``grad_fn``.\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"<AddBackward0 object at 0x000002004F7CC248>\n"
]
}
],
"source": [
"print(y.grad_fn)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Do more operations on ``y``:\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"tensor([[27., 27.],\n",
"        [27., 27.]], grad_fn=<MulBackward0>) tensor(27., grad_fn=<MeanBackward0>)\n"
]
}
],
"source": [
"z = y * y * 3\n",
"out = z.mean()\n",
"\n",
"print(z, out)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"``.requires_grad_( ... )`` changes an existing tensor's ``requires_grad`` flag in-place.\n",
"The flag defaults to ``False`` if not given.\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"False\n",
"True\n",
"<SumBackward0 object at 0x000002004F7D5608>\n"
]
}
],
"source": [
"a = torch.randn(2, 2)\n",
"a = ((a * 3) / (a - 1))\n",
"print(a.requires_grad)\n",
"a.requires_grad_(True)\n",
"print(a.requires_grad)\n",
"b = (a * a).sum()\n",
"print(b.grad_fn)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Gradients\n",
"---------\n",
"Let's backprop now.\n",
"Because ``out`` contains a single scalar, ``out.backward()`` is equivalent to ``out.backward(torch.tensor(1.))``.\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [],
"source": [
"out.backward()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"print gradients d(out)/dx\n",
"\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"tensor([[4.5000, 4.5000],\n",
"        [4.5000, 4.5000]])\n"
]
}
],
"source": [
"print(x.grad)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You should have got a matrix of ``4.5``. Let's call the ``out``\n",
"*Tensor* “$o$”.\n",
"\n",
"We have that $o = \\frac{1}{4}\\sum_i z_i$,\n",
"$z_i = 3(x_i+2)^2$ and $z_i\\bigr\\rvert_{x_i=1} = 27$.\n",
"\n",
"Therefore,\n",
"$\\frac{\\partial o}{\\partial x_i} = \\frac{3}{2}(x_i+2)$, hence\n",
"$\\frac{\\partial o}{\\partial x_i}\\bigr\\rvert_{x_i=1} = \\frac{9}{2} = 4.5$.\n",
"\n"
]
},
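{
"cell_type": "markdown",
"metadata": {},
"source": [
"To connect the formula to the printed numbers, here is a small added check (it assumes ``x`` is still the all-ones tensor created above): evaluating the analytic gradient $\\frac{3}{2}(x_i+2)$ at $x_i=1$ should reproduce ``x.grad``."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Added sanity check: the analytic gradient 3/2 * (x + 2) at x = 1\n",
"# should match the gradient computed by autograd\n",
"print(3 / 2 * (x.detach() + 2))\n",
"print(x.grad)"
]
},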
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Mathematically, if you have a vector-valued function $\\vec{y} = f(\\vec{x})$, then the gradient of $\\vec{y}$ with respect to $\\vec{x}$ is a Jacobian matrix:\n",
"\n",
"$J = \\begin{pmatrix} \\frac{\\partial y_{1}}{\\partial x_{1}} & \\cdots & \\frac{\\partial y_{1}}{\\partial x_{n}} \\\\ \\vdots & \\ddots & \\vdots \\\\ \\frac{\\partial y_{m}}{\\partial x_{1}} & \\cdots & \\frac{\\partial y_{m}}{\\partial x_{n}} \\end{pmatrix}$\n",
"\n",
"Generally speaking, `torch.autograd` is an engine for computing vector-Jacobian products. That is, given any vector $v=(v_{1}\\;v_{2}\\;\\cdots\\;v_{m})^{T}$, it computes $v^{T}\\cdot J$. If $v$ happens to be the gradient of a scalar function $l=g(\\vec{y})$, that is, $v=(\\frac{\\partial l}{\\partial y_{1}}\\;\\cdots\\;\\frac{\\partial l}{\\partial y_{m}})^{T}$, then by the chain rule the vector-Jacobian product is the gradient of $l$ with respect to $\\vec{x}$:\n",
"\n",
"$J^{T}\\cdot v = \\begin{pmatrix} \\frac{\\partial y_{1}}{\\partial x_{1}} & \\cdots & \\frac{\\partial y_{m}}{\\partial x_{1}} \\\\ \\vdots & \\ddots & \\vdots \\\\ \\frac{\\partial y_{1}}{\\partial x_{n}} & \\cdots & \\frac{\\partial y_{m}}{\\partial x_{n}} \\end{pmatrix} \\begin{pmatrix} \\frac{\\partial l}{\\partial y_{1}}\\\\ \\vdots \\\\ \\frac{\\partial l}{\\partial y_{m}} \\end{pmatrix} = \\begin{pmatrix} \\frac{\\partial l}{\\partial x_{1}}\\\\ \\vdots \\\\ \\frac{\\partial l}{\\partial x_{n}} \\end{pmatrix}$\n",
"\n",
"(Note that $v^{T}\\cdot J$ gives a row vector, which can be treated as a column vector by taking $J^{T}\\cdot v$.)\n",
"\n",
"This property of the vector-Jacobian product makes it very convenient to feed external gradients into a model that has a non-scalar output.\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now let's take a look at an example of a vector-Jacobian product:\n"
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"tensor([ 293.4463,   50.6356, 1031.2501], grad_fn=<MulBackward0>)\n"
]
}
],
"source": [
"x = torch.randn(3, requires_grad=True)\n",
"\n",
"y = x * 2\n",
"while y.data.norm() < 1000:\n",
"    y = y * 2\n",
"\n",
"print(y)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In this case ``y`` is no longer a scalar. ``torch.autograd`` cannot compute the full Jacobian directly, but if we just want the vector-Jacobian product, we simply pass the vector to ``backward`` as an argument:"
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"tensor([5.1200e+01, 5.1200e+02, 5.1200e-02])\n"
]
}
],
"source": [
"gradients = torch.tensor([0.1, 1.0, 0.0001], dtype=torch.float)\n",
"y.backward(gradients)\n",
"\n",
"print(x.grad)"
]
},
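{
"cell_type": "markdown",
"metadata": {},
"source": [
"(An added note to check the numbers: inside the loop, ``y`` is just ``x`` scaled by a power of two, $y = 2^{k}x$, so the Jacobian is diagonal with entries $2^{k}$. In the run above $2^{k} = 512$, which is why ``x.grad`` equals $512 \\cdot v = (51.2,\\ 512,\\ 0.0512)$.)"
]
},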
{
"cell_type": "markdown",
"metadata": {},
"source": [
"If ``.requires_grad=True`` but you don't want autograd to track the computation,\n",
"you can wrap the code block in ``with torch.no_grad():``\n"
]
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"True\n",
"True\n",
"False\n"
]
}
],
"source": [
"print(x.requires_grad)\n",
"print((x ** 2).requires_grad)\n",
"\n",
"with torch.no_grad():\n",
"    print((x ** 2).requires_grad)"
]
},
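{
"cell_type": "markdown",
"metadata": {},
"source": [
"The introduction also mentioned ``.detach()``; here is a minimal sketch for completeness. It returns a new tensor with the same content but detached from the graph, so it does not require gradients:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print(x.requires_grad)\n",
"y = x.detach()\n",
"print(y.requires_grad)\n",
"# Same values, but y is cut off from the computation history\n",
"print(x.eq(y).all())"
]
},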
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Read later:**\n",
"\n",
"Official documentation of ``autograd`` and ``Function``: https://pytorch.org/docs/autograd\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Pytorch for Deeplearning",
"language": "python",
"name": "pytorch"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.5"
}
},
"nbformat": 4,
"nbformat_minor": 1
}