pytorch-handbook/chapter1/1_tensor_tutorial.ipynb

{
"cells": [
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"%matplotlib inline"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"PyTorch是什么?\n",
"================\n",
"\n",
"基于Python的科学计算包服务于以下两种场景:\n",
"\n",
"- 作为NumPy的替代品可以使用GPU的强大计算能力\n",
"- 提供最大的灵活性和高速的深度学习研究平台\n",
" \n",
"\n",
"开始\n",
"---------------\n",
"\n",
"Tensors张量\n",
"\n",
"Tensors与Numpy中的 ndarrays类似但是在PyTorch中\n",
"Tensors 可以使用GPU进行计算.\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"from __future__ import print_function\n",
"import torch"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"创建一个 5x3 矩阵, 但是未初始化:\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"tensor([[0.0000, 0.0000, 0.0000],\n",
" [0.0000, 0.0000, 0.0000],\n",
" [0.0000, 0.0000, 0.0000],\n",
" [0.0000, 0.0000, 0.0000],\n",
" [0.0000, 0.0000, 0.0000]])\n"
]
}
],
"source": [
"x = torch.empty(5, 3)\n",
"print(x)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"创建一个随机初始化的矩阵:\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"tensor([[0.6972, 0.0231, 0.3087],\n",
" [0.2083, 0.6141, 0.6896],\n",
" [0.7228, 0.9715, 0.5304],\n",
" [0.7727, 0.1621, 0.9777],\n",
" [0.6526, 0.6170, 0.2605]])\n"
]
}
],
"source": [
"x = torch.rand(5, 3)\n",
"print(x)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"创建一个0填充的矩阵数据类型为long:\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"tensor([[0, 0, 0],\n",
" [0, 0, 0],\n",
" [0, 0, 0],\n",
" [0, 0, 0],\n",
" [0, 0, 0]])\n"
]
}
],
"source": [
"x = torch.zeros(5, 3, dtype=torch.long)\n",
"print(x)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"创建tensor并使用现有数据初始化:\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"tensor([5.5000, 3.0000])\n"
]
}
],
"source": [
"x = torch.tensor([5.5, 3])\n",
"print(x)"
]
},
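{
"cell_type": "markdown",
"metadata": {},
"source": [
"A small added sketch (not part of the original tutorial): ``torch.tensor`` also accepts nested Python lists and an explicit ``dtype``, so a 2D tensor can be built directly from data:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# extra illustrative example: a 2x2 float tensor from a nested list\n",
"m = torch.tensor([[1, 2], [3, 4]], dtype=torch.float)\n",
"print(m)"
]
},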
{
"cell_type": "markdown",
"metadata": {},
"source": [
"根据现有的张量创建张量。 这些方法将重用输入张量的属性,例如, dtype除非设置新的值进行覆盖"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"tensor([[1., 1., 1.],\n",
" [1., 1., 1.],\n",
" [1., 1., 1.],\n",
" [1., 1., 1.],\n",
" [1., 1., 1.]], dtype=torch.float64)\n",
"tensor([[ 0.5691, -2.0126, -0.4064],\n",
" [-0.0863, 0.4692, -1.1209],\n",
" [-1.1177, -0.5764, -0.5363],\n",
" [-0.4390, 0.6688, 0.0889],\n",
" [ 1.3334, -1.1600, 1.8457]])\n"
]
}
],
"source": [
"x = x.new_ones(5, 3, dtype=torch.double) # new_* 方法来创建对象\n",
"print(x)\n",
"\n",
"x = torch.randn_like(x, dtype=torch.float) # 覆盖 dtype!\n",
"print(x) # 对象的size 是相同的,只是值和类型发生了变化"
]
},
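{
"cell_type": "markdown",
"metadata": {},
"source": [
"The same idea extends to the ``*_like`` factory functions; a minimal added sketch using ``torch.zeros_like``, which keeps the size (and, by default, the dtype) of its input:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# extra illustrative example: zeros_like reuses the size and dtype of x\n",
"w = torch.zeros_like(x)\n",
"print(w.size(), w.dtype)"
]
},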
{
"cell_type": "markdown",
"metadata": {},
"source": [
"获取 size\n",
"\n",
"***译者注使用size方法与Numpy的shape属性返回的相同张量也支持shape属性后面会详细介绍***\n"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"torch.Size([5, 3])\n"
]
}
],
"source": [
"print(x.size())"
]
},
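{
"cell_type": "markdown",
"metadata": {},
"source": [
"As the translator's note above mentions, tensors also expose a ``shape`` attribute, and (as the note below points out) ``torch.Size`` behaves like a tuple. A minimal added sketch:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# extra illustrative example: x.shape equals x.size(), and torch.Size unpacks like a tuple\n",
"print(x.shape)\n",
"rows, cols = x.size()\n",
"print(rows, cols)"
]
},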
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<div class=\"alert alert-info\"><h4>Note</h4><p>``torch.Size`` 返回值是 tuple类型, 所以它支持tuple类型的所有操作.</p></div>\n",
"\n",
"操作\n",
"\n",
"操作有多种语法。 \n",
"\n",
"我们将看一下加法运算。\n",
"\n",
"加法1:"
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"tensor([[ 0.7808, -1.4388, 0.3151],\n",
" [-0.0076, 1.0716, -0.8465],\n",
" [-0.8175, 0.3625, -0.2005],\n",
" [ 0.2435, 0.8512, 0.7142],\n",
" [ 1.4737, -0.8545, 2.4833]])\n"
]
}
],
"source": [
"y = torch.rand(5, 3)\n",
"print(x + y)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"加法2\n"
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"tensor([[ 0.7808, -1.4388, 0.3151],\n",
" [-0.0076, 1.0716, -0.8465],\n",
" [-0.8175, 0.3625, -0.2005],\n",
" [ 0.2435, 0.8512, 0.7142],\n",
" [ 1.4737, -0.8545, 2.4833]])\n"
]
}
],
"source": [
"print(torch.add(x, y))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"提供输出tensor作为参数\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"tensor([[ 0.7808, -1.4388, 0.3151],\n",
" [-0.0076, 1.0716, -0.8465],\n",
" [-0.8175, 0.3625, -0.2005],\n",
" [ 0.2435, 0.8512, 0.7142],\n",
" [ 1.4737, -0.8545, 2.4833]])\n"
]
}
],
"source": [
"result = torch.empty(5, 3)\n",
"torch.add(x, y, out=result)\n",
"print(result)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"替换\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"tensor([[ 0.7808, -1.4388, 0.3151],\n",
" [-0.0076, 1.0716, -0.8465],\n",
" [-0.8175, 0.3625, -0.2005],\n",
" [ 0.2435, 0.8512, 0.7142],\n",
" [ 1.4737, -0.8545, 2.4833]])\n"
]
}
],
"source": [
"# adds x to y\n",
"y.add_(x)\n",
"print(y)"
]
},
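{
"cell_type": "markdown",
"metadata": {},
"source": [
"Other methods follow the same trailing-underscore convention described in the note below. A minimal added sketch (using fresh variables ``a`` and ``b`` so the tutorial's ``x`` and ``y`` are left untouched):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# extra illustrative example of other in-place operations\n",
"a = torch.zeros(2, 3)\n",
"b = torch.rand(2, 3)\n",
"a.copy_(b)   # copy the contents of b into a, in place\n",
"a.t_()       # transpose a in place; a is now 3x2\n",
"print(a.size())"
]
},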
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<div class=\"alert alert-info\"><h4>Note</h4><p>任何 以``_`` 结尾的操作都会用结果替换原变量.\n",
" 例如: ``x.copy_(y)``, ``x.t_()``, 都会改变 ``x``.</p></div>\n",
"\n",
"你可以使用与NumPy索引方式相同的操作来进行对张量的操作\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": 13,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"tensor([-2.0126, 0.4692, -0.5764, 0.6688, -1.1600])\n"
]
}
],
"source": [
"print(x[:, 1])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"``torch.view``: 可以改变张量的维度和大小\n",
"\n",
"***译者注torch.view 与Numpy的reshape类似***\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": 14,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"torch.Size([4, 4]) torch.Size([16]) torch.Size([2, 8])\n"
]
}
],
"source": [
"x = torch.randn(4, 4)\n",
"y = x.view(16)\n",
"z = x.view(-1, 8) # size -1 从其他维度推断\n",
"print(x.size(), y.size(), z.size())"
]
},
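{
"cell_type": "markdown",
"metadata": {},
"source": [
"A minimal added sketch (not part of the original tutorial): the ``-1`` placeholder is only inferred when the remaining dimensions divide the total number of elements, and ``reshape`` behaves much like ``view`` for contiguous tensors:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# extra illustrative example\n",
"t = torch.arange(12)             # 12 elements\n",
"print(t.view(3, -1).size())      # the -1 is inferred as 4\n",
"print(t.reshape(2, 6).size())    # reshape gives a 2x6 tensor here"
]
},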
{
"cell_type": "markdown",
"metadata": {},
"source": [
"如果你有只有一个元素的张量,使用``.item()``来得到Python数据类型的数值\n"
]
},
{
"cell_type": "code",
"execution_count": 15,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"tensor([-0.2368])\n",
"-0.23680149018764496\n"
]
}
],
"source": [
"x = torch.randn(1)\n",
"print(x)\n",
"print(x.item())"
]
},
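{
"cell_type": "markdown",
"metadata": {},
"source": [
"A hedged aside (not part of the original tutorial): ``.item()`` only works for one-element tensors; a tensor with more elements can be converted to a nested Python list with ``.tolist()``:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# extra illustrative example\n",
"v = torch.tensor([1.0, 2.0, 3.0])\n",
"print(v.tolist())   # a plain Python list\n",
"# calling v.item() here would raise an error, because v has more than one element"
]
},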
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Read later:**\n",
"\n",
"\n",
"100+ Tensor operations, including transposing, indexing, slicing,\n",
"mathematical operations, linear algebra, random numbers, etc., are\n",
"described [here](https://pytorch.org/docs/torch).\n",
"\n",
"NumPy 转换\n",
"------------\n",
"\n",
"将一个Torch Tensor转换为NumPy数组是一件轻松的事反之亦然。\n",
"\n",
"Torch Tensor与NumPy数组共享底层内存地址修改一个会导致另一个的变化。\n",
"\n",
"将一个Torch Tensor转换为NumPy数组\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": 16,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"tensor([1., 1., 1., 1., 1.])\n"
]
}
],
"source": [
"a = torch.ones(5)\n",
"print(a)"
]
},
{
"cell_type": "code",
"execution_count": 17,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[1. 1. 1. 1. 1.]\n"
]
}
],
"source": [
"b = a.numpy()\n",
"print(b)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"观察numpy数组的值是如何改变的。\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": 18,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"tensor([2., 2., 2., 2., 2.])\n",
"[2. 2. 2. 2. 2.]\n"
]
}
],
"source": [
"a.add_(1)\n",
"print(a)\n",
"print(b)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
" NumPy Array 转化成 Torch Tensor\n",
"\n",
"使用from_numpy自动转化\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": 19,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[2. 2. 2. 2. 2.]\n",
"tensor([2., 2., 2., 2., 2.], dtype=torch.float64)\n"
]
}
],
"source": [
"import numpy as np\n",
"a = np.ones(5)\n",
"b = torch.from_numpy(a)\n",
"np.add(a, 1, out=a)\n",
"print(a)\n",
"print(b)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"所有的 Tensor 类型默认都是基于CPU CharTensor 类型不支持到\n",
"NumPy 的转换.\n",
"CUDA 张量\n",
"------------\n",
"\n",
"使用``.to`` 方法 可以将Tensor移动到任何设备中\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": 20,
"metadata": {
"scrolled": true
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"tensor([0.7632], device='cuda:0')\n",
"tensor([0.7632], dtype=torch.float64)\n"
]
}
],
"source": [
"# is_available 函数判断是否有cuda可以使用\n",
"# ``torch.device``将张量移动到指定的设备中\n",
"if torch.cuda.is_available():\n",
" device = torch.device(\"cuda\") # a CUDA 设备对象\n",
" y = torch.ones_like(x, device=device) # 直接从GPU创建张量\n",
" x = x.to(device) # 或者直接使用``.to(\"cuda\")``将张量移动到cuda中\n",
" z = x + y\n",
" print(z)\n",
" print(z.to(\"cpu\", torch.double)) # ``.to`` 也会对变量的类型做更改"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Pytorch for Deeplearning",
"language": "python",
"name": "pytorch"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.7"
}
},
"nbformat": 4,
"nbformat_minor": 1
}