Commit b9c8dd7

[CodeStyle][Typos][O-[1-12],S-2,S-4,S-5,S-7] Fix typo (#7634)

* fix O
* fix samle, Sovler, Simle
* Segment
* Update basic_usage_en.md
1 parent: b9a05c6 · commit: b9c8dd7

File tree: 18 files changed (+22, -38 lines)


_typos.toml (0 additions, 16 deletions)

Each `word = "word"` entry below maps a misspelling to itself, which (following the crate-ci/typos convention) makes the `typos` checker accept it as valid. This commit fixes the listed spellings throughout the docs, so the corresponding allowlist entries are deleted and the checker can enforce them again.
````diff
@@ -28,22 +28,6 @@ datas = "datas"
 feeded = "feeded"

 # These words need to be fixed
-Operaton = "Operaton"
-Optimizaing = "Optimizaing"
-Optimzier = "Optimzier"
-Setment = "Setment"
-Simle = "Simle"
-Sovler = "Sovler"
-occurence = "occurence"
-opeartor = "opeartor"
-opeartors = "opeartors"
-operaters = "operaters"
-optmization = "optmization"
-outpu = "outpu"
-outpus = "outpus"
-overrided = "overrided"
-overwrited = "overwrited"
-samle = "samle"
 schedual = "schedual"
 secenarios = "secenarios"
 sematic = "sematic"
````

docs/api/gen_doc.py (1 addition, 1 deletion)
````diff
@@ -35,7 +35,7 @@
 # "short_name":"", # without module name
 # "module_name":"", # the module of the real api belongs to
 # "display":True/Flase, # consider the not_display_doc_list and the display_doc_list
-# "has_overwrited_doc":True/False #
+# "has_overwritten_doc":True/False #
 # "doc_filename" # document filename without suffix
 # "suggested_name":"", # the shortest name in all_names
 # }
````
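
For orientation, the comment block in this hunk documents the per-API metadata record that `gen_doc.py` assembles. A minimal sketch of one such record, with hypothetical field values (the `api_info` name and every value here are illustrative, not taken from the repository):

```python
# Hypothetical example of the per-API record documented above;
# field values are illustrative only.
api_info = {
    "short_name": "relu",                   # without module name
    "module_name": "paddle.nn.functional",  # the module the real api belongs to
    "display": True,                        # honors not_display_doc_list / display_doc_list
    "has_overwritten_doc": False,           # field renamed by this commit
    "doc_filename": "relu",                 # document filename without suffix
    "suggested_name": "paddle.nn.functional.relu",  # shortest name in all_names
}
```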

docs/design/concurrent/parallel_do.md (2 additions, 2 deletions)
````diff
@@ -15,7 +15,7 @@ AddOutput(kOutputs, "Outputs needed to be merged from different devices").AsDupl
 AddOutput(kParallelScopes,
           "Scopes for all local variables in forward pass. One scope for each device");
 AddAttr<framework::BlockDesc *>(kParallelBlock,
-               "List of operaters to be executed in parallel");
+               "List of operators to be executed in parallel");
 ```

 A vanilla implementation of parallel_do can be shown as the following (`|` means single thread and
@@ -94,7 +94,7 @@ There are serial places we can make this parallel_do faster.

 ### forward: split input onto different devices

-If the input of the parallel_do is independent from any prior opeartors, we can avoid this step by
+If the input of the parallel_do is independent from any prior operators, we can avoid this step by
 prefetching the input onto different devices in a separate background thread. And the python code
 looks like this.
 ```python
````
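
The `python` block that this hunk's context opens is cut off by the diff view. As a rough, generic sketch of the background-prefetch idea the sentence describes (the `split_onto_devices` helper is hypothetical, not a Fluid API):

```python
import queue
import threading

def prefetch_batches(batches, split_onto_devices, capacity=2):
    """Yield device-split batches while a background thread runs ahead."""
    q = queue.Queue(maxsize=capacity)

    def producer():
        for batch in batches:
            q.put(split_onto_devices(batch))  # splitting/copying overlaps compute
        q.put(None)  # sentinel: input exhausted

    threading.Thread(target=producer, daemon=True).start()
    while (item := q.get()) is not None:
        yield item
```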

docs/design/data_type/float16.md (1 addition, 1 deletion)
````diff
@@ -101,7 +101,7 @@ In Fluid, a neural network is represented as a protobuf message called [ProgramD
 ### Operator level requirement
 Each operator has many kernels for different data types, devices, and library types. The operator will select the appropriate kernel to run based on, among other things, the data type of the input variables. By default, every Fluid operator has a float data type kernel that takes float variables as input and generates float output.

-This means that if we provide float input to the first operator in a program, then each opeartor will use float kernel to compute float output and send it as input to the next operator to trigger the float kernel. Overall, the program will run in float mode and give us a final output of float data type.
+This means that if we provide float input to the first operator in a program, then each operator will use float kernel to compute float output and send it as input to the next operator to trigger the float kernel. Overall, the program will run in float mode and give us a final output of float data type.

 The same principle applies if we want a program to run in float16 mode. We provide input variable of float16 data type to the first operator, and then one by one, each operator in the program will run the float16 kernel (provided that each operator in this program has float16 kernels registered) until we finally obtain a float16 output variable.
````
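
As a loose numpy analogy for the cascade this passage describes (illustrative only, not Fluid's kernel selection), the dtype of the first input determines the dtype every downstream op computes in:

```python
import numpy as np

x = np.random.rand(4, 4).astype(np.float16)  # float16 input to the "first operator"
y = x @ x                          # matmul takes its float16 path
z = np.maximum(y, np.float16(0))   # relu-like op stays in float16
print(z.dtype)                     # float16: the whole chain ran in float16 mode
```
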
docs/design/dynamic_rnn/rnn_design.md (1 addition, 1 deletion)
````diff
@@ -198,7 +198,7 @@ std::vector<SortedSeqItem> SortBySeqLen(const LODTensor& tensor);
 Because the order of the input sequences changes, the following existing interfaces need corresponding modification:

 - InitMemories, memory needs to be rearranged according to `sorted_seqs`
-- SetmentInputs
+- SegmentInputs
 - ConcatOutputs

 In addition, since `sorted_seqs` needs to be reused by `RecurrentGradientOp`, it will become a new output of `RecurrentOp`,
````

docs/design/dynamic_rnn/rnn_design_en.md (1 addition, 1 deletion)
````diff
@@ -136,7 +136,7 @@ std::vector<SortedSeqItem> SortBySeqLen(const LODTensor& tensor);
 Due to the sequence of input sequences, the following existing interfaces need to be modified:

 - InitMemories, memory needs to be rearranged according to `sorted_seqs`
-- SetmentInputs
+- SegmentInputs
 - ConcatOutputs

 In addition, because `sorted_seqs` needs to be multiplexed with `RecurrentGradientOp`, it will become a new output of `RecurrentOp`.
````
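
For context on what `SortBySeqLen` and `SegmentInputs` do in these two design docs, here is a minimal numpy sketch of the idea: sort variable-length sequences by length so each time step forms a dense batch. The function bodies are illustrative, not Fluid implementations:

```python
import numpy as np

def sort_by_seq_len(seqs):
    """Return sequences ordered longest-first, plus the original indices."""
    order = sorted(range(len(seqs)), key=lambda i: len(seqs[i]), reverse=True)
    return [seqs[i] for i in order], order

def segment_inputs(sorted_seqs):
    """Slice sorted sequences into per-time-step batches (the SegmentInputs idea)."""
    max_len = len(sorted_seqs[0])
    return [np.stack([s[t] for s in sorted_seqs if len(s) > t])
            for t in range(max_len)]

seqs = [np.ones((3, 2)), np.ones((5, 2)), np.ones((2, 2))]
sorted_seqs, order = sort_by_seq_len(seqs)
steps = segment_inputs(sorted_seqs)  # step t batches only still-active sequences
print([b.shape for b in steps])      # [(3, 2), (3, 2), (2, 2), (1, 2), (1, 2)]
```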

docs/design/memory/memory_optimization.md (1 addition, 1 deletion)
````diff
@@ -79,7 +79,7 @@ In former control flow graph, the out-edges of node 5 are 5 --> 6 and 5 --> 2, a

 - Uses and Defs

-An assignmemt to a variable or temporary defines that variable. An occurence of a variable on the right-hand side of an assignment(or in other expressions) uses the variable. We can define the *def* of a variable as the set of graph nodes that define it; or the *def* of a graph node as the set of variables that it defines; and the similarly for the *use* of a variable or graph node. In former control flow graph, *def(3)* = {c}, *use(3)* = {b, c}.
+An assignmemt to a variable or temporary defines that variable. An occurrence of a variable on the right-hand side of an assignment(or in other expressions) uses the variable. We can define the *def* of a variable as the set of graph nodes that define it; or the *def* of a graph node as the set of variables that it defines; and the similarly for the *use* of a variable or graph node. In former control flow graph, *def(3)* = {c}, *use(3)* = {b, c}.

 - Liveness
````
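
The use/def sets in this passage feed the standard backward liveness dataflow, live_in(n) = use(n) ∪ (live_out(n) − def(n)). A small self-contained sketch of that generic algorithm (not Paddle's implementation; the three-node graph is made up to echo the passage's node 3):

```python
def liveness(succ, use, defs):
    """Backward dataflow: iterate live_in(n) = use(n) | (live_out(n) - def(n))."""
    live_in = {n: set() for n in succ}
    live_out = {n: set() for n in succ}
    changed = True
    while changed:
        changed = False
        for n in succ:
            out = set().union(*(live_in[s] for s in succ[n])) if succ[n] else set()
            new_in = use[n] | (out - defs[n])
            if new_in != live_in[n] or out != live_out[n]:
                live_in[n], live_out[n], changed = new_in, out, True
    return live_in, live_out

# Tiny graph echoing the passage's node 3: def(3) = {c}, use(3) = {b, c}.
succ = {1: [2], 2: [3], 3: []}
use  = {1: set(),  2: {"a"},  3: {"b", "c"}}
defs = {1: {"a"},  2: {"b"},  3: {"c"}}
print(liveness(succ, use, defs)[0])  # live_in[3] == {'b', 'c'}
```
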
docs/design/mkldnn/int8/QAT/README.md (1 addition, 1 deletion)
````diff
@@ -62,7 +62,7 @@ Notes:
 ```... → input1 → conv2d → output1 → batch_norm → output2 → relu → output3 → ...```
 and we want to quantize the `conv2d` op, then after applying FP32 optimizations the sequence will become
 ```... → input1 → conv2d → output3 → ...```
-and the quantization scales have to be collected for the `input1` and `outpu3` tensors in the Quant model.
+and the quantization scales have to be collected for the `input1` and `output3` tensors in the Quant model.
 2. Quantization of the following operators is supported: `conv2d`, `depthwise_conv2d`, `mul`, `fc`, `matmul`, `pool2d`, `reshape2`, `transpose2`, `concat`.
 3. The longest sequence of consecutive quantizable operators in the model, the biggest performance boost can be achieved through quantization:
 ```... → conv2d → conv2d → pool2d → conv2d → conv2d → ...```
````
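
For intuition about the per-tensor "quantization scales" collected here, a common scheme is a symmetric max-abs scale; this is a generic sketch, not necessarily the exact scheme oneDNN/Paddle uses:

```python
import numpy as np

def maxabs_scale(tensor, qmax=127):
    """Symmetric int8 scale mapping [-max|x|, max|x|] onto [-qmax, qmax]."""
    return float(np.abs(tensor).max()) / qmax

x = np.random.randn(8, 16).astype(np.float32)            # stands in for `input1`
s = maxabs_scale(x)
q = np.clip(np.round(x / s), -127, 127).astype(np.int8)  # quantize
x_hat = q.astype(np.float32) * s                          # dequantize
print(float(np.abs(x - x_hat).max()))                     # small round-off error
```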

docs/design/modules/optimizer.md (1 addition, 1 deletion)
````diff
@@ -72,7 +72,7 @@ class Optimizer:
         parameters_and_grads: a list of (variable, gradient) pair to update.

     Returns:
-      optmization_op_list: a list of optimization operator that will update parameter using gradient.
+      optimization_op_list: a list of optimization operator that will update parameter using gradient.
     """
     return None
````
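
The docstring in this hunk specifies an Optimizer hook that turns (parameter, gradient) pairs into a list of update operations. A minimal numpy sketch of that contract (the class, method name, and closures here are illustrative, not Fluid's actual Optimizer):

```python
import numpy as np

class SGDOptimizerSketch:
    def __init__(self, learning_rate=0.01):
        self.lr = learning_rate

    def create_optimization_pass(self, parameters_and_grads):
        """Return one update op (a closure) per (param, grad) pair, playing
        the role of the optimization_op_list described in the docstring."""
        ops = []
        for param, grad in parameters_and_grads:
            def update(p=param, g=grad):
                p -= self.lr * g  # in-place SGD step
            ops.append(update)
        return ops

w = np.ones(3, dtype=np.float32)
g = np.full(3, 0.5, dtype=np.float32)
for op in SGDOptimizerSketch(0.1).create_optimization_pass([(w, g)]):
    op()
print(w)  # [0.95 0.95 0.95]
```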

docs/dev_guides/amp_precision/amp_test_dev_guide_cn.md (2 additions, 2 deletions)
````diff
@@ -73,7 +73,7 @@

 First, the input data needs to be computed. For more complex computations this can make the setUp function overly long, so it may be written as a separate function, **as in line 13 of Code 1-1**.

-The outpus part needs to be given the reference results computed with numpy.
+The outputs part needs to be given the reference results computed with numpy.

 **Code 1-1**
@@ -283,7 +283,7 @@ BF16 requires calling **convert_float_to_uint16** when passing in inputs and input reference values

 3. Set self.outputs. **As shown in line 15 of Code 2-1.**

-The outpus part needs to be given reference results in Uint16 format. **convert_float_to_uint16** can be used for the conversion.
+The outputs part needs to be given reference results in Uint16 format. **convert_float_to_uint16** can be used for the conversion.

 **Code 2-1**
````
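
For reference, the `convert_float_to_uint16` helper mentioned here produces the uint16 bit patterns of bfloat16 values for BF16 op tests. A minimal numpy sketch of the idea (simple truncation of the low bits; Paddle's actual helper may round to nearest even):

```python
import numpy as np

def convert_float_to_uint16_sketch(x):
    """Keep each float32's top 16 bits: that bit pattern is the bfloat16
    value, stored as uint16 the way BF16 op tests expect reference outputs."""
    x = np.ascontiguousarray(x, dtype=np.float32)
    return (x.view(np.uint32) >> 16).astype(np.uint16)

ref = np.array([1.0, 0.5, -2.0], dtype=np.float32)
print(convert_float_to_uint16_sketch(ref))  # [16256 16128 49152]
```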
