26 changes: 11 additions & 15 deletions docs/add_your_parallel.md
# Add your own parallelism

## Overview

To enable researchers and engineers to extend our framework to other novel large-scale distributed training algorithms
with less effort, we have decoupled various components in the training lifecycle. You can implement your own
parallelism by simply inheriting from the base class.

The main components are:

1. `ProcessGroupInitializer`
2. `GradientHandler`
3. `Schedule`

## Process Group Initializer

Parallelism is often managed by process groups where processes involved in the same parallel algorithm are placed in the same
process group. For different parallel algorithms, different process groups need to be created. ColossalAI provides a
global context for users to easily manage their process groups. If you wish to add a new process group, you can easily
define a new class and set it in your configuration file. To define your own way of creating process groups, you can
follow the steps below to create a new distributed initialization.

1. Add your parallel mode in `colossalai.context.parallel_mode.ParallelMode`.
```python
class ParallelMode(Enum):
    GLOBAL = 'global'
    DATA = 'data'
    PIPELINE = 'pipe'
    PIPELINE_PREV = 'pipe_prev'
    PIPELINE_NEXT = 'pipe_next'
    ...

    NEW_MODE = 'new_mode'  # define your mode here
```

2. Create a `ProcessGroupInitializer`. You can refer to examples given in `colossalai.context.dist_group_initializer`. The
first six arguments are fixed. `ParallelContext` will pass in these arguments for you. If you need to set other
arguments, you can add them behind like the `arg1, arg2` in the example below. Lastly, register your initializer to the
registry by adding the decorator `@DIST_GROUP_INITIALIZER.register_module`.

```python
# sample initializer class
@DIST_GROUP_INITIALIZER.register_module
class MyParallelInitializer(ProcessGroupInitializer):

    def __init__(self,
                 rank: int,
                 world_size: int,
                 config: Config,
                 data_parallel_size: int,
                 pipeline_parlalel_size: int,
                 tensor_parallel_size: int,
                 arg1,
                 arg2):
        super().__init__(rank, world_size, config)
        self.arg1 = arg1
        self.arg2 = arg2
        # ... your variable init

    def init_parallel_groups(self):
        # initialize your process groups
        pass
```
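
After that, you can insert your new initializer into the current mode-to-initializer mapping `colossalai.constants.INITIALIZER_MAPPING`. You can also modify this file to dynamically change the mapping between names and parallel modes.

```python
colossalai.constants.INITIALIZER_MAPPING['new_mode'] = 'MyParallelInitializer'
```

3. Set your initializer in your configuration file. If your initializer takes extra arguments, you can pass them in here. This allows the `ParallelContext` to create your initializer and initialize the process groups you need.

```python
parallel = dict(
    pipeline=dict(size=1),
    tensor=dict(size=x, mode='new_mode')  # this is where you enable your new parallel mode
)
```
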
## Gradient Handler

Gradient handlers are objects which execute the all-reduce operations on parameters' gradients. As different all-reduce
strategies may be executed for different kinds of parallelism, users can
inherit `colossalai.engine.gradient_handler.BaseGradientHandler` to implement their strategies. Currently, the library
uses the normal data parallel gradient handler which all-reduces the gradients across data parallel ranks. The data
parallel gradient handler is added to the engine automatically if data parallelism is detected. You can add your own
gradient handler like below:

```python
from colossalai.registry import GRADIENT_HANDLER
from colossalai.engine import BaseGradientHandler

@GRADIENT_HANDLER.register_module
class YourGradientHandler(BaseGradientHandler):

    def handle_gradient(self):
        do_something()
```
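
Afterwards, you can specify the gradient handler you want to use in the configuration file:

```python
dist_initializer = [
    dict(type='YourGradientHandler'),
]
```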

## Schedule

Schedule entails how to execute a forward and backward pass. Currently, ColossalAI provides pipeline and non-pipeline
schedules. If you want to modify how the forward and backward passes are executed, you can
inherit `colossalai.engine.BaseSchedule` and implement your idea. You can also add your schedule to the engine before
training.
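
As a rough illustration, a custom schedule might look like the sketch below. This is a minimal sketch only: the `forward_backward_step` hook name and its signature are assumptions made for this example, so check `colossalai.engine.BaseSchedule` in your installed version for the exact interface to override.

```python
from colossalai.engine import BaseSchedule


class MyCustomSchedule(BaseSchedule):
    """Runs a plain forward and backward pass for one batch.

    The hook name and signature below are assumptions for illustration;
    consult BaseSchedule in your version for the methods it actually defines.
    """

    def forward_backward_step(self, data_iter, model, criterion, optimizer=None, return_loss=True):
        data, label = next(data_iter)   # fetch one batch from the iterator
        output = model(data)            # forward pass
        loss = criterion(output, label)
        if optimizer is not None:
            loss.backward()             # backward pass; optimizer.step() is handled elsewhere
        return output, label, loss if return_loss else None
```
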
103 changes: 103 additions & 0 deletions docs/add_your_parallel_zh.md
# Add a new parallelism technique

To make it easier for researchers and engineers to extend our framework to new large-scale distributed training algorithms, we have decoupled several components of the training lifecycle. You can implement a new parallelism technique simply by inheriting from the base classes.

The main components are:

1. `ProcessGroupInitializer`
2. `GradientHandler`
3. `Schedule`

## Process group initializer

Parallelism is usually managed through process groups: processes that belong to the same parallel algorithm are placed in the same process group. If several different parallelism techniques are used in the system, several different process groups need to be created. ColossalAI provides a global context for users to manage their process groups conveniently. If you wish to add a new process group, you can define a new class and set it in your configuration file. The code blocks below show how to add a new parallelism technique to the system and how to initialize it.

1. Add your parallel mode in `colossalai.context.parallel_mode.ParallelMode`.
```python
class ParallelMode(Enum):
    GLOBAL = 'global'
    DATA = 'data'
    PIPELINE = 'pipe'
    PIPELINE_PREV = 'pipe_prev'
    PIPELINE_NEXT = 'pipe_next'
    ...

    NEW_MODE = 'new_mode'  # define your mode here
```

2. Create a subclass of `ProcessGroupInitializer`. You can refer to the examples given in `colossalai.context.dist_group_initializer`. The first six arguments are determined by `ParallelContext`. If you need to set new arguments, you can replace `arg1` and `arg2` in the example below with your own. Lastly, register your initializer in our registry with the `@DIST_GROUP_INITIALIZER.register_module` decorator.
```python
# sample initializer class
@DIST_GROUP_INITIALIZER.register_module
class MyParallelInitializer(ProcessGroupInitializer):

    def __init__(self,
                 rank: int,
                 world_size: int,
                 config: Config,
                 data_parallel_size: int,
                 pipeline_parlalel_size: int,
                 tensor_parallel_size: int,
                 arg1,
                 arg2):
        super().__init__(rank, world_size, config)
        self.arg1 = arg1
        self.arg2 = arg2
        # ... your variable init

    def init_parallel_groups(self):
        # initialize your process groups
        pass
```

After that, you can insert your initializer into the current mode-to-initializer mapping `colossalai.constants.INITIALIZER_MAPPING`. You can also modify this file to dynamically change the mapping between names and parallel modes.

```python
colossalai.constants.INITIALIZER_MAPPING['new_mode'] = 'MyParallelInitializer'
```

3. Set your initializer in the configuration file. If your initializer requires extra arguments, you can pass them in here. The code below lets `ParallelContext` create your initializer and initialize the process groups you need.

```python
parallel = dict(
pipeline=dict(size=1),
tensor=dict(size=x, mode='new_mode') # this is where you enable your new parallel mode
)
```

## Gradient handler

Gradient handlers perform the all-reduce operations on the gradients of model parameters. Since different parallelism techniques may require different all-reduce operations, users can inherit `colossalai.engine.gradient_handler.BaseGradientHandler` to implement their own behavior. Currently, ColossalAI uses the normal data parallel gradient handler, which all-reduces the gradients across all data parallel ranks. This handler is created automatically when ColossalAI detects that data parallelism is in use. You can add your own gradient handler with the code shown below:

```python
from colossalai.registry import GRADIENT_HANDLER
from colossalai.engine import BaseGradientHandler

@GRADIENT_HANDLER.register_module
class YourGradientHandler(BaseGradientHandler):

    def handle_gradient(self):
        do_something()
```

After that, you can specify the gradient handler you want to use in the configuration file.

```python
dist_initializer = [
dict(type='YourGradientHandler'),
]
```

## Schedule

The schedule specifies which operations to execute during the forward and backward passes. ColossalAI provides both pipeline and non-pipeline schedules. If you want to change the way the forward and backward passes are executed, you can inherit `colossalai.engine.BaseSchedule` and implement the behavior you want. You can also add your schedule to our engine before training the model.
35 changes: 19 additions & 16 deletions docs/amp.md
# Mixed precision training

In ColossalAI, we have incorporated different implementations of mixed precision training:
1. torch.cuda.amp
2. apex.amp
3. tensor-parallel amp

The first two rely on the original implementation of [PyTorch](https://pytorch.org/docs/stable/amp.html)
(version 1.6 and above) and [Nvidia Apex](https://github.com/NVIDIA/apex). However, these two methods are not compatible
with tensor parallelism. Because tensors are split across devices in tensor parallelism, it is required
to communicate among different processes to check if `inf` or `nan` occurs in the whole model weights. For the mixed
precision training with tensor parallelism, we adapted this feature from [Megatron-LM](https://github.com/NVIDIA/Megatron-LM).

To use mixed precision training, you can easily specify the `fp16` field in the config file. Currently, PyTorch and
Apex amp cannot be guaranteed to work with tensor and pipeline parallelism, thus, only the last one is recommended if you
are using hybrid parallelism.

## PyTorch AMP

PyTorch provides mixed precision training in version 1.6 and above. It provides an easy way to cast data to `fp16` format
while keeping some operations such as reductions in `fp32`. You can configure the gradient scaler in the config file.

```python
from colossalai.engine import AMP_TYPE

fp16=dict(
    mode=AMP_TYPE.TORCH,
    # below are default values for grad scaler
    init_scale=2.**16,
    growth_factor=2.0,
    backoff_factor=0.5,
    growth_interval=2000,
    enabled=True
)
```
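
For reference, the options above are simply the constructor arguments of PyTorch's `torch.cuda.amp.GradScaler`. The sketch below is plain PyTorch (outside of the ColossalAI engine, with a toy model and random data) and only illustrates what these settings control:

```python
import torch
from torch.cuda.amp import GradScaler, autocast

model = torch.nn.Linear(1024, 1024).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)

# same default values as in the fp16 config above
scaler = GradScaler(init_scale=2.**16, growth_factor=2.0,
                    backoff_factor=0.5, growth_interval=2000, enabled=True)

for _ in range(10):
    data = torch.randn(8, 1024, device='cuda')
    target = torch.randn(8, 1024, device='cuda')
    optimizer.zero_grad()
    with autocast():               # run the forward pass in mixed precision
        loss = torch.nn.functional.mse_loss(model(data), target)
    scaler.scale(loss).backward()  # scale the loss to avoid fp16 gradient underflow
    scaler.step(optimizer)         # unscale gradients and skip the step if inf/nan is found
    scaler.update()                # adjust the loss scale for the next iteration
```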


## Apex AMP

For this mode, we rely on the [Apex](https://nvidia.github.io/apex/) implementation for mixed precision training. We support
this plugin because it allows for finer control on the granularity of mixed precision. For example, `O2` level (optimization level 2)
will keep batch normalization in `fp32`.

The following code block shows a config file for Apex AMP.

```python
from colossalai.engine import AMP_TYPE

fp16 = dict(
    mode=AMP_TYPE.APEX,
    # below are the default values
    enabled=True,
    opt_level='O1',
    cast_model_type=None,
    patch_torch_functions=None,
    keep_batchnorm_fp32=None,
    master_weights=None,
    loss_scale=None,
    cast_model_outputs=None,
    num_losses=1,
    verbosity=1,
    min_loss_scale=None,
    max_loss_scale=16777216.0
)
```
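
For context, `opt_level` and the other fields mirror the keyword arguments of `apex.amp.initialize`. The sketch below shows the raw Apex workflow these options feed into, outside of ColossalAI and with a toy model and random data:

```python
import torch
from apex import amp

model = torch.nn.Linear(1024, 1024).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)

# O1 casts inputs per-op via patched torch functions; O2 additionally keeps
# batch normalization in fp32 and maintains fp32 master weights
model, optimizer = amp.initialize(model, optimizer, opt_level='O1')

for _ in range(10):
    data = torch.randn(8, 1024, device='cuda')
    target = torch.randn(8, 1024, device='cuda')
    optimizer.zero_grad()
    loss = torch.nn.functional.mse_loss(model(data), target)
    with amp.scale_loss(loss, optimizer) as scaled_loss:  # dynamic loss scaling
        scaled_loss.backward()
    optimizer.step()
```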

## Tensor Parallel AMP

We leveraged the Megatron-LM implementation to achieve mixed precision training while maintaining compatibility with complex tensor
and pipeline parallelism.

The following code block shows a config file for this mode.

```python
from colossalai.engine import AMP_TYPE

fp16 = dict(
    mode=AMP_TYPE.PARALLEL,
    # below are the default values
    clip_grad=0,
    log_num_zeros_in_grad=False,
    initial_scale=2 ** 32,
    min_scale=1,
    growth_factor=2,
    backoff_factor=0.5,
    growth_interval=1000,
    hysteresis=2
)
```
79 changes: 79 additions & 0 deletions docs/amp_zh.md
# Mixed precision training

ColossalAI supports the following three ways of mixed precision training:
1. torch.cuda.amp
2. apex.amp
3. Tensor parallel AMP

The first two methods rely on the native implementations of [PyTorch](https://pytorch.org/docs/stable/amp.html) (version 1.6 or above) and [Nvidia Apex](https://github.com/NVIDIA/apex). However, these two methods are not compatible with tensor parallelism, because tensor parallelism splits tensors and stores them on different devices; mixed precision training that is compatible with tensor parallelism therefore requires constant communication between processes to determine whether `inf` or `nan` appears in the model parameters. For this reason, we adopted the implementation from [Megatron-LM](https://github.com/NVIDIA/Megatron-LM).

You can enable mixed precision training simply by setting the `fp16` field in the configuration file. Currently, PyTorch and Apex AMP cannot be guaranteed to be compatible with tensor and pipeline parallelism, so we recommend using the last method.

## PyTorch AMP

PyTorch provides mixed precision training in versions 1.6 and above. It can cast data to the `fp16` format while keeping some operations in `fp32`. You can configure it in the configuration file.

```python
from colossalai.engine import AMP_TYPE

fp16=dict(
mode=AMP_TYPE.TORCH,
# below are default values for grad scaler
init_scale=2.**16,
growth_factor=2.0,
backoff_factor=0.5,
growth_interval=2000,
enabled=True
)
```

## Apex AMP

We use the mixed precision training from [Apex](https://nvidia.github.io/apex/) because this mode provides fine-grained control over mixed precision. For example, the `O2` level (optimization level 2) keeps batch normalization in `fp32`. The code block below shows a configuration file for Apex AMP.

```python
from colossalai.engine import AMP_TYPE

fp16 = dict(
mode=AMP_TYPE.APEX,
# below are the default values
enabled=True,
opt_level='O1',
cast_model_type=None,
patch_torch_functions=None,
keep_batchnorm_fp32=None,
master_weights=None,
loss_scale=None,
cast_model_outputs=None,
num_losses=1,
verbosity=1,
min_loss_scale=None,
max_loss_scale=16777216.0
)
```

## Tensor parallel AMP

We adapted the mixed precision training implementation from Megatron-LM, which is compatible with both tensor parallelism and pipeline parallelism. The code block below shows a configuration file for tensor parallel AMP.

```python
from colossalai.engine import AMP_TYPE

fp16 = dict(
mode=AMP_TYPE.PARALLEL,
# below are the default values
clip_grad=0,
log_num_zeros_in_grad=False,
initial_scale=2 ** 32,
min_scale=1,
growth_factor=2,
backoff_factor=0.5,
growth_interval=1000,
hysteresis=2
)
```
2 changes: 1 addition & 1 deletion docs/config.md
# Config file

Here is a config file example showing how to train a ViT model on the CIFAR10 dataset using ColossalAI:

```python
# build train_dataset and train_dataloader from this dictionary
...
```