Commit 2f286e1

Update docs
1 parent 15ed1ce commit 2f286e1

File tree

1 file changed: +8 -5 lines changed


docs/source-pytorch/accelerators/gpu_intermediate.rst

Lines changed: 8 additions & 5 deletions
@@ -472,19 +472,22 @@ Distributed and 16-bit precision
 Below are the possible configurations we support.
 
 +-------+---------+-----+-----+--------+-----------------------------------------------------------------------+
-| 1 GPU | 1+ GPUs | DP  | DDP | 16-bit | command                                                               |
+| 1 GPU | 1+ GPUs | DDP | DP  | 16-bit | command                                                               |
 +=======+=========+=====+=====+========+=======================================================================+
 | Y     |         |     |     |        | `Trainer(accelerator="gpu", devices=1)`                               |
 +-------+---------+-----+-----+--------+-----------------------------------------------------------------------+
 | Y     |         |     |     | Y      | `Trainer(accelerator="gpu", devices=1, precision=16)`                 |
 +-------+---------+-----+-----+--------+-----------------------------------------------------------------------+
-|       | Y       | Y   |     |        | `Trainer(accelerator="gpu", devices=k, strategy='dp')`                |
+|       | Y       | Y   |     |        | `Trainer(accelerator="gpu", devices=k, strategy='ddp')`               |
 +-------+---------+-----+-----+--------+-----------------------------------------------------------------------+
-|       | Y       |     | Y   |        | `Trainer(accelerator="gpu", devices=k, strategy='ddp')`               |
+|       | Y       | Y   |     | Y      | `Trainer(accelerator="gpu", devices=k, strategy='ddp', precision=16)` |
 +-------+---------+-----+-----+--------+-----------------------------------------------------------------------+
-|       | Y       |     | Y   | Y      | `Trainer(accelerator="gpu", devices=k, strategy='ddp', precision=16)` |
+|       | Y       |     | Y   |        | `Trainer(accelerator="gpu", devices=k, strategy='dp')`                |
 +-------+---------+-----+-----+--------+-----------------------------------------------------------------------+
-# FIXME(carlos): check native amp and DP
+|       | Y       |     | Y   | Y      | `Trainer(accelerator="gpu", devices=k, strategy='dp', precision=16)`  |
++-------+---------+-----+-----+--------+-----------------------------------------------------------------------+
+
+DDP and DP can also be used with 1 GPU, but there's no reason to do so other than debugging distributed-related issues.
 
 Implement Your Own Distributed (DDP) training
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
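The table rows above all map onto the same `Trainer(...)` keyword arguments. As a sanity check on the corrected rows, here is a minimal pure-Python sketch: `trainer_kwargs` is a hypothetical helper written for this illustration (it is not part of the Lightning API), which builds the argument dict you would splat into `pytorch_lightning.Trainer`.

```python
# Illustrative sketch only: builds the keyword arguments for one row of
# the configurations table. `trainer_kwargs` is a hypothetical helper,
# not a PyTorch Lightning API.

def trainer_kwargs(num_devices, strategy=None, use_16bit=False):
    """Map one table row to Trainer(...) keyword arguments."""
    kwargs = {"accelerator": "gpu", "devices": num_devices}
    if strategy is not None:   # the DDP / DP columns
        kwargs["strategy"] = strategy
    if use_16bit:              # the 16-bit column
        kwargs["precision"] = 16
    return kwargs

# The corrected ordering: DDP rows first, then DP, each with/without 16-bit.
print(trainer_kwargs(1))
# → {'accelerator': 'gpu', 'devices': 1}
print(trainer_kwargs(4, "ddp", use_16bit=True))
# → {'accelerator': 'gpu', 'devices': 4, 'strategy': 'ddp', 'precision': 16}
```

With Lightning installed, the result would be used as `Trainer(**trainer_kwargs(4, "ddp", use_16bit=True))`.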
