Update 5_data_parallel_tutorial.ipynb

This commit is contained in:
Muli Yang 2019-02-19 17:32:23 +08:00 committed by GitHub
parent 4de06da2a1
commit e2b5e871ff


@@ -14,13 +14,13 @@
"metadata": {},
"source": [
"\n",
"Data Parallelism (Optional)\n",
"===========================\n",
"**Authors**: [Sung Kim](https://github.com/hunkim) and [Jenny Kang](https://github.com/jennykang)\n",
"\n",
"In this tutorial, we will learn how to use multiple GPUs with ``DataParallel``.\n",
"\n",
"It's very easy to use multiple GPUs in PyTorch. You can put a model on a GPU like this:\n",
"\n",
"```python\n",
"\n",
@@ -28,12 +28,12 @@
" model.to(device)\n",
"```\n",
"Then, you can copy all of your tensors to the GPU:\n",
"```python\n",
"\n",
" mytensor = my_tensor.to(device)\n",
"```\n",
"Please note that just calling ``my_tensor.to(device)`` does not move ``my_tensor`` to the GPU in place; it returns a new copy of the tensor on the GPU. You need to assign it to a new tensor and use that tensor on the GPU.\n",
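A minimal sketch of the copy semantics described above (the device name is an assumption; the code falls back to the CPU when no GPU is visible, so it runs anywhere):

```python
import torch

# Fall back to the CPU when no GPU is visible, so the sketch runs anywhere.
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

my_tensor = torch.randn(2, 3)      # lives on the CPU
gpu_tensor = my_tensor.to(device)  # a copy on `device` (when it is a GPU);
                                   # `my_tensor` itself is NOT moved

print(my_tensor.device, gpu_tensor.device)
```

When `device` is a CUDA device, `my_tensor` stays on the CPU and only `gpu_tensor` lives on the GPU.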
"\n",
"Executing forward and backward propagations on multiple GPUs is natural.\n",
"However, PyTorch only uses one GPU by default.\n",
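A minimal sketch of wrapping a model in ``DataParallel`` so it runs on multiple GPUs (the layer sizes here are made up for illustration; on a single-GPU or CPU-only machine the wrapper is simply skipped):

```python
import torch
import torch.nn as nn

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

model = nn.Linear(10, 5)  # any module works here
if torch.cuda.device_count() > 1:
    # DataParallel splits each input batch across all visible GPUs
    # and gathers the outputs back on the first one.
    model = nn.DataParallel(model)
model.to(device)

out = model(torch.randn(4, 10).to(device))  # batch of 4, 10 features each
```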
@@ -104,7 +104,7 @@
"Dummy Dataset\n",
"-------------\n",
"\n",
"Make a dummy (random) dataset.\n",
"You only need to implement `__getitem__`.\n",
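For example, a random dataset might look like the following sketch (`RandomDataset` and its sizes are illustrative; note that ``DataLoader`` also needs ``__len__``):

```python
import torch
from torch.utils.data import Dataset, DataLoader

class RandomDataset(Dataset):
    """A dummy dataset that serves random vectors."""

    def __init__(self, size, length):
        self.len = length
        self.data = torch.randn(length, size)

    def __getitem__(self, index):
        return self.data[index]

    def __len__(self):
        return self.len

loader = DataLoader(RandomDataset(size=5, length=100), batch_size=30)
batch = next(iter(loader))  # first batch: 30 samples of size 5
```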
"\n",
"\n"