| [TensorFlow 2.0 models on GLUE](#TensorFlow-2.0-Bert-models-on-GLUE) | Examples running BERT TensorFlow 2.0 model on the GLUE tasks. |
|[Running on TPUs](#running-on-tpus)| Examples of running fine-tuning tasks on Google TPUs to accelerate workloads. |
|[Language Model fine-tuning](#language-model-fine-tuning)| Fine-tuning the library models for language modeling on a text dataset. Causal language modeling for GPT/GPT-2, masked language modeling for BERT/RoBERTa. |
|[Language Generation](#language-generation)| Conditional text generation using the auto-regressive models of the library: GPT, GPT-2, Transformer-XL and XLNet. |
|[GLUE](#glue)| Examples running BERT/XLM/XLNet/RoBERTa on the 9 GLUE tasks. Examples feature distributed training as well as half-precision. |
Quick benchmarks from the script (no other modifications):
Mixed precision (AMP) reduces the training time considerably for the same hardware and hyper-parameters (same batch size was used).
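
To give a concrete idea of what mixed-precision training involves, below is a minimal, self-contained sketch using `torch.cuda.amp`. It is illustrative only: the example scripts may enable fp16 through a different mechanism (for instance NVIDIA apex behind a `--fp16` flag), and the tiny linear model and random data are placeholders, not part of the library.

```python
# Hedged sketch of mixed-precision (AMP) training with torch.cuda.amp.
# The model, optimizer and random data below are placeholders.
import torch

device = "cuda"  # autocast/GradScaler target CUDA devices
model = torch.nn.Linear(128, 2).to(device)            # stand-in for a real model
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)
scaler = torch.cuda.amp.GradScaler()

for step in range(10):
    inputs = torch.randn(32, 128, device=device)       # fake batch
    labels = torch.randint(0, 2, (32,), device=device)
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():                     # run the forward pass in fp16 where safe
        loss = torch.nn.functional.cross_entropy(model(inputs), labels)
    scaler.scale(loss).backward()                       # scale the loss to avoid fp16 gradient underflow
    scaler.step(optimizer)                              # unscale gradients, then update the weights
    scaler.update()                                     # adapt the loss scale for the next step
```

With the same batch size and hyper-parameters, running the forward and backward passes in half precision is what produces the speedup reported above.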
## Running on TPUs
You can accelerate your workloads on Google's TPUs. For information on how to set up your TPU environment refer to this
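
As a rough illustration of what TPU execution looks like in TensorFlow 2.0, the sketch below connects to a TPU and trains a model under a `TPUStrategy` scope. This is a sketch under assumptions, not the exact code used by the example scripts: the `grpc://10.0.0.1:8470` address, the toy Keras model, and the dummy dataset are all placeholders.

```python
# Hedged sketch: connecting to a Cloud TPU and training under TPUStrategy
# in TensorFlow 2.x. The TPU address and toy model are placeholders.
import tensorflow as tf

# Resolve and initialize the TPU system (the address comes from your TPU setup).
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="grpc://10.0.0.1:8470")
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.experimental.TPUStrategy(resolver)

# Build and compile the model inside the strategy scope so its variables
# are replicated across the TPU cores.
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu", input_shape=(128,)),
        tf.keras.layers.Dense(2, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

# Dummy data stands in for a real GLUE dataset; TPUs need fixed batch shapes.
dataset = tf.data.Dataset.from_tensor_slices(
    (tf.random.normal((256, 128)),
     tf.random.uniform((256,), maxval=2, dtype=tf.int32))
).batch(32, drop_remainder=True)

model.fit(dataset, epochs=1)
```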