
Commit 8719afa

patil-suraj and sgugger authored
CLIP (#11445)
* begin second draft
* fix import, style
* add loss
* fix embeds, logits_scale, and projection
* fix imports
* add conversion script
* add feature_extractor and processor
* style
* add tests for tokenizer, extractor and processor
* add vision model tests
* add weight init
* add more tests
* fix save_load test
* model output, docstrings, causal mask
* config doc
* add clip model tests
* return dict
* begin integration test
* add integration tests
* fix-copies
* fix init
* Clip => CLIP
* fix module name
* docs
* fix doc
* output_dim => projection_dim
* fix checkpoint names
* remove fast tokenizer file
* fix conversion script
* fix tests, quality
* put causal mask on device
* Apply suggestions from code review

  Co-authored-by: Sylvain Gugger <[email protected]>
* fix attribute test
* style
* address Sylvain's comments
* style
* fix docstrings
* add quick_gelu in activations, docstrings
* clean-up attention test
* fix act fun
* fix config
* fix torchscript tests
* even batch_size
* remove comment
* fix output to_tuple
* fix save load tests
* fix add tokens test
* add fast tokenizer
* update copyright
* new processor API
* fix docs
* docstrings
* docs
* fix doc
* fix doc
* fix tokenizer
* fix import in doc example
* Apply suggestions from code review

  Co-authored-by: Sylvain Gugger <[email protected]>
* check types of config
* valhalla => openai
* load image using url
* fix test
* typo

Co-authored-by: Sylvain Gugger <[email protected]>
1 parent 4ce6bcc commit 8719afa

25 files changed: +3848 −45 lines

README.md

Lines changed: 1 addition & 0 deletions
@@ -200,6 +200,7 @@ Current number of checkpoints: ![](https://img.shields.io/endpoint?url=https://h
 1. **[BlenderbotSmall](https://huggingface.co/transformers/model_doc/blenderbot_small.html)** (from Facebook) released with the paper [Recipes for building an open-domain chatbot](https://arxiv.org/abs/2004.13637) by Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston.
 1. **[BORT](https://huggingface.co/transformers/model_doc/bort.html)** (from Alexa) released with the paper [Optimal Subarchitecture Extraction For BERT](https://arxiv.org/abs/2010.10499) by Adrian de Wynter and Daniel J. Perry.
 1. **[CamemBERT](https://huggingface.co/transformers/model_doc/camembert.html)** (from Inria/Facebook/Sorbonne) released with the paper [CamemBERT: a Tasty French Language Model](https://arxiv.org/abs/1911.03894) by Louis Martin*, Benjamin Muller*, Pedro Javier Ortiz Suárez*, Yoann Dupont, Laurent Romary, Éric Villemonte de la Clergerie, Djamé Seddah and Benoît Sagot.
+1. **[CLIP](https://huggingface.co/transformers/model_doc/clip.html)** (from OpenAI) released with the paper [Learning Transferable Visual Models From Natural Language Supervision](https://arxiv.org/abs/2103.00020) by Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, Ilya Sutskever.
 1. **[ConvBERT](https://huggingface.co/transformers/model_doc/convbert.html)** (from YituTech) released with the paper [ConvBERT: Improving BERT with Span-based Dynamic Convolution](https://arxiv.org/abs/2008.02496) by Zihang Jiang, Weihao Yu, Daquan Zhou, Yunpeng Chen, Jiashi Feng, Shuicheng Yan.
 1. **[CPM](https://huggingface.co/transformers/model_doc/cpm.html)** (from Tsinghua University) released with the paper [CPM: A Large-scale Generative Chinese Pre-trained Language Model](https://arxiv.org/abs/2012.00413) by Zhengyan Zhang, Xu Han, Hao Zhou, Pei Ke, Yuxian Gu, Deming Ye, Yujia Qin, Yusheng Su, Haozhe Ji, Jian Guan, Fanchao Qi, Xiaozhi Wang, Yanan Zheng, Guoyang Zeng, Huanqi Cao, Shengqi Chen, Daixuan Li, Zhenbo Sun, Zhiyuan Liu, Minlie Huang, Wentao Han, Jie Tang, Juanzi Li, Xiaoyan Zhu, Maosong Sun.
 1. **[CTRL](https://huggingface.co/transformers/model_doc/ctrl.html)** (from Salesforce) released with the paper [CTRL: A Conditional Transformer Language Model for Controllable Generation](https://arxiv.org/abs/1909.05858) by Nitish Shirish Keskar*, Bryan McCann*, Lav R. Varshney, Caiming Xiong and Richard Socher.

docs/source/index.rst

Lines changed: 52 additions & 45 deletions
Large diffs are not rendered by default.

docs/source/model_doc/clip.rst

Lines changed: 154 additions & 0 deletions
@@ -0,0 +1,154 @@
..
    Copyright 2021 The HuggingFace Team. All rights reserved.

    Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
    the License. You may obtain a copy of the License at

        http://www.apache.org/licenses/LICENSE-2.0

    Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
    an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
    specific language governing permissions and limitations under the License.

CLIP
-----------------------------------------------------------------------------------------------------------------------

Overview
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The CLIP model was proposed in `Learning Transferable Visual Models From Natural Language Supervision
<https://arxiv.org/abs/2103.00020>`__ by Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh,
Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, Ilya Sutskever. CLIP
(Contrastive Language-Image Pre-Training) is a neural network trained on a variety of (image, text) pairs. It can be
instructed in natural language to predict the most relevant text snippet, given an image, without directly optimizing
for the task, similarly to the zero-shot capabilities of GPT-2 and 3.

The abstract from the paper is the following:

*State-of-the-art computer vision systems are trained to predict a fixed set of predetermined object categories. This
restricted form of supervision limits their generality and usability since additional labeled data is needed to specify
any other visual concept. Learning directly from raw text about images is a promising alternative which leverages a
much broader source of supervision. We demonstrate that the simple pre-training task of predicting which caption goes
with which image is an efficient and scalable way to learn SOTA image representations from scratch on a dataset of 400
million (image, text) pairs collected from the internet. After pre-training, natural language is used to reference
learned visual concepts (or describe new ones) enabling zero-shot transfer of the model to downstream tasks. We study
the performance of this approach by benchmarking on over 30 different existing computer vision datasets, spanning tasks
such as OCR, action recognition in videos, geo-localization, and many types of fine-grained object classification. The
model transfers non-trivially to most tasks and is often competitive with a fully supervised baseline without the need
for any dataset specific training. For instance, we match the accuracy of the original ResNet-50 on ImageNet zero-shot
without needing to use any of the 1.28 million training examples it was trained on. We release our code and pre-trained
model weights at this https URL.*

Usage
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

CLIP is a multi-modal vision and language model. It can be used for image-text similarity and for zero-shot image
classification. CLIP uses a ViT-like Transformer to get the visual features and a causal language model to get the text
features. Both the text and visual features are then projected to a latent space with identical dimension. The dot
product between the projected image and text features is then used as a similarity score.

To feed images to the Transformer encoder, each image is split into a sequence of fixed-size non-overlapping patches,
which are then linearly embedded. A [CLS] token is added to serve as the representation of the entire image. The
authors also add absolute position embeddings, and feed the resulting sequence of vectors to a standard Transformer
encoder. The :class:`~transformers.CLIPFeatureExtractor` can be used to resize (or rescale) and normalize images for
the model.

The :class:`~transformers.CLIPTokenizer` is used to encode the text. The :class:`~transformers.CLIPProcessor` wraps
:class:`~transformers.CLIPFeatureExtractor` and :class:`~transformers.CLIPTokenizer` into a single instance to both
encode the text and prepare the images. The following example shows how to get the image-text similarity scores using
:class:`~transformers.CLIPProcessor` and :class:`~transformers.CLIPModel`.

.. code-block::

    >>> import torch
    >>> from PIL import Image
    >>> import requests

    >>> from transformers import CLIPProcessor, CLIPModel

    >>> model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
    >>> processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

    >>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
    >>> image = Image.open(requests.get(url, stream=True).raw)

    >>> inputs = processor(text=["a photo of a cat", "a photo of a dog"], images=image, return_tensors="pt", padding=True)

    >>> outputs = model(**inputs)
    >>> logits_per_image = outputs.logits_per_image  # this is the image-text similarity score
    >>> probs = logits_per_image.softmax(dim=1)  # we can take the softmax to get the label probabilities


This model was contributed by `valhalla <https://huggingface.co/valhalla>`__. The original code can be found `here
<https://github.com/openai/CLIP>`__.

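The projected text and image embeddings can also be computed separately, which is convenient when the text prompts are
encoded once and reused across many images. The following is a minimal sketch (not part of the original example) that
reuses the ``model``, ``processor`` and ``image`` objects defined above together with the ``get_text_features`` and
``get_image_features`` methods documented below; the explicit L2 normalization is an assumption made here so the dot
product behaves as a cosine similarity.

.. code-block::

    >>> import torch

    >>> # encode the text prompts and the image separately
    >>> text_inputs = processor(text=["a photo of a cat", "a photo of a dog"], return_tensors="pt", padding=True)
    >>> image_inputs = processor(images=image, return_tensors="pt")

    >>> with torch.no_grad():
    ...     text_embeds = model.get_text_features(**text_inputs)
    ...     image_embeds = model.get_image_features(**image_inputs)

    >>> # L2-normalize so that the dot product is a cosine similarity
    >>> text_embeds = text_embeds / text_embeds.norm(dim=-1, keepdim=True)
    >>> image_embeds = image_embeds / image_embeds.norm(dim=-1, keepdim=True)
    >>> similarity = image_embeds @ text_embeds.T  # shape (1, 2)
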
CLIPConfig
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.CLIPConfig
    :members: from_text_vision_configs


CLIPTextConfig
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.CLIPTextConfig
    :members:


CLIPVisionConfig
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.CLIPVisionConfig
    :members:


CLIPTokenizer
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.CLIPTokenizer
    :members: build_inputs_with_special_tokens, get_special_tokens_mask,
        create_token_type_ids_from_sequences, save_vocabulary


CLIPTokenizerFast
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.CLIPTokenizerFast
    :members:


CLIPFeatureExtractor
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.CLIPFeatureExtractor
    :members:


CLIPProcessor
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.CLIPProcessor
    :members:


CLIPModel
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.CLIPModel
    :members: forward, get_text_features, get_image_features


CLIPTextModel
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.CLIPTextModel
    :members: forward


CLIPVisionModel
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.CLIPVisionModel
    :members: forward
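As a small illustration of the ``from_text_vision_configs`` helper listed under :class:`~transformers.CLIPConfig`
above, a composite configuration can be assembled from its two sub-configurations. This is a sketch using default
hyperparameters, not a recipe taken from the PR itself; the resulting model is randomly initialized.

.. code-block::

    >>> from transformers import CLIPConfig, CLIPTextConfig, CLIPVisionConfig, CLIPModel

    >>> # default text and vision sub-configurations
    >>> text_config = CLIPTextConfig()
    >>> vision_config = CLIPVisionConfig()

    >>> # combine them into a single CLIPConfig and build an untrained model from it
    >>> config = CLIPConfig.from_text_vision_configs(text_config, vision_config)
    >>> model = CLIPModel(config)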

src/transformers/__init__.py

Lines changed: 36 additions & 0 deletions
@@ -166,6 +166,13 @@
         "BlenderbotSmallTokenizer",
     ],
     "models.camembert": ["CAMEMBERT_PRETRAINED_CONFIG_ARCHIVE_MAP", "CamembertConfig"],
+    "models.clip": [
+        "CLIP_PRETRAINED_CONFIG_ARCHIVE_MAP",
+        "CLIPConfig",
+        "CLIPTextConfig",
+        "CLIPTokenizer",
+        "CLIPVisionConfig",
+    ],
     "models.convbert": ["CONVBERT_PRETRAINED_CONFIG_ARCHIVE_MAP", "ConvBertConfig", "ConvBertTokenizer"],
     "models.cpm": ["CpmTokenizer"],
     "models.ctrl": ["CTRL_PRETRAINED_CONFIG_ARCHIVE_MAP", "CTRLConfig", "CTRLTokenizer"],
@@ -315,6 +322,7 @@
 # tokenizers-backed objects
 if is_tokenizers_available():
     # Fast tokenizers
+    _import_structure["models.clip"].append("CLIPTokenizerFast")
     _import_structure["models.convbert"].append("ConvBertTokenizerFast")
     _import_structure["models.albert"].append("AlbertTokenizerFast")
     _import_structure["models.bart"].append("BartTokenizerFast")
@@ -390,6 +398,8 @@
 # Vision-specific objects
 if is_vision_available():
     _import_structure["image_utils"] = ["ImageFeatureExtractionMixin"]
+    _import_structure["models.clip"].append("CLIPFeatureExtractor")
+    _import_structure["models.clip"].append("CLIPProcessor")
     _import_structure["models.deit"].append("DeiTFeatureExtractor")
     _import_structure["models.vit"].append("ViTFeatureExtractor")
 else:
@@ -498,6 +508,7 @@
             "AutoModelWithLMHead",
         ]
     )
+
     _import_structure["models.bart"].extend(
         [
             "BART_PRETRAINED_MODEL_ARCHIVE_LIST",
@@ -588,6 +599,15 @@
             "CamembertModel",
         ]
     )
+    _import_structure["models.clip"].extend(
+        [
+            "CLIP_PRETRAINED_MODEL_ARCHIVE_LIST",
+            "CLIPModel",
+            "CLIPPreTrainedModel",
+            "CLIPTextModel",
+            "CLIPVisionModel",
+        ]
+    )
     _import_structure["models.convbert"].extend(
         [
             "CONVBERT_PRETRAINED_MODEL_ARCHIVE_LIST",
@@ -1566,6 +1586,13 @@
         BlenderbotSmallTokenizer,
     )
     from .models.camembert import CAMEMBERT_PRETRAINED_CONFIG_ARCHIVE_MAP, CamembertConfig
+    from .models.clip import (
+        CLIP_PRETRAINED_CONFIG_ARCHIVE_MAP,
+        CLIPConfig,
+        CLIPTextConfig,
+        CLIPTokenizer,
+        CLIPVisionConfig,
+    )
     from .models.convbert import CONVBERT_PRETRAINED_CONFIG_ARCHIVE_MAP, ConvBertConfig, ConvBertTokenizer
     from .models.cpm import CpmTokenizer
     from .models.ctrl import CTRL_PRETRAINED_CONFIG_ARCHIVE_MAP, CTRLConfig, CTRLTokenizer
@@ -1715,6 +1742,7 @@
         from .models.bert import BertTokenizerFast
         from .models.big_bird import BigBirdTokenizerFast
         from .models.camembert import CamembertTokenizerFast
+        from .models.clip import CLIPTokenizerFast
         from .models.convbert import ConvBertTokenizerFast
         from .models.deberta import DebertaTokenizerFast
         from .models.distilbert import DistilBertTokenizerFast
@@ -1763,6 +1791,7 @@
 
     if is_vision_available():
         from .image_utils import ImageFeatureExtractionMixin
+        from .models.clip import CLIPFeatureExtractor, CLIPProcessor
         from .models.deit import DeiTFeatureExtractor
         from .models.vit import ViTFeatureExtractor
     else:
@@ -1936,6 +1965,13 @@
             CamembertForTokenClassification,
             CamembertModel,
         )
+        from .models.clip import (
+            CLIP_PRETRAINED_MODEL_ARCHIVE_LIST,
+            CLIPModel,
+            CLIPPreTrainedModel,
+            CLIPTextModel,
+            CLIPVisionModel,
+        )
         from .models.convbert import (
             CONVBERT_PRETRAINED_MODEL_ARCHIVE_LIST,
             ConvBertForMaskedLM,
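With these export tables in place, the new classes become importable from the top-level package. A quick sanity check,
assuming an environment with this branch installed plus ``tokenizers`` and ``Pillow`` (so the fast tokenizer and the
vision objects are exported):

.. code-block::

    >>> from transformers import (
    ...     CLIPConfig,
    ...     CLIPFeatureExtractor,
    ...     CLIPModel,
    ...     CLIPProcessor,
    ...     CLIPTextModel,
    ...     CLIPTokenizer,
    ...     CLIPTokenizerFast,
    ...     CLIPVisionModel,
    ... )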

src/transformers/activations.py

Lines changed: 5 additions & 0 deletions
@@ -52,6 +52,10 @@ def gelu_fast(x):
     return 0.5 * x * (1.0 + torch.tanh(x * 0.7978845608 * (1.0 + 0.044715 * x * x)))
 
 
+def quick_gelu(x):
+    return x * torch.sigmoid(1.702 * x)
+
+
 def _silu_python(x):
     """
     See Gaussian Error Linear Units (Hendrycks et al., https://arxiv.org/abs/1606.08415) where the SiLU (Sigmoid Linear
@@ -85,6 +89,7 @@ def linear_act(x):
     "tanh": torch.tanh,
     "gelu_new": gelu_new,
     "gelu_fast": gelu_fast,
+    "quick_gelu": quick_gelu,
     "mish": mish,
     "linear": linear_act,
     "sigmoid": torch.sigmoid,

src/transformers/convert_slow_tokenizer.py

Lines changed: 24 additions & 0 deletions
@@ -701,13 +701,37 @@ def post_processor(self):
         )
 
 
+class CLIPConverter(Converter):
+    def converted(self) -> Tokenizer:
+        vocab = self.original_tokenizer.encoder
+        merges = list(self.original_tokenizer.bpe_ranks.keys())
+
+        tokenizer = Tokenizer(
+            BPE(
+                vocab=vocab,
+                merges=merges,
+                dropout=None,
+                continuing_subword_prefix="",
+                end_of_word_suffix="</w>",
+                fuse_unk=False,
+            )
+        )
+
+        tokenizer.pre_tokenizer = pre_tokenizers.ByteLevel(add_prefix_space=self.original_tokenizer.add_prefix_space)
+        tokenizer.decoder = decoders.ByteLevel()
+        tokenizer.post_processor = processors.ByteLevel(trim_offsets=False)
+
+        return tokenizer
+
+
 SLOW_TO_FAST_CONVERTERS = {
     "AlbertTokenizer": AlbertConverter,
     "BartTokenizer": RobertaConverter,
     "BarthezTokenizer": BarthezConverter,
     "BertTokenizer": BertConverter,
     "BigBirdTokenizer": BigBirdConverter,
     "CamembertTokenizer": CamembertConverter,
+    "CLIPTokenizer": CLIPConverter,
     "ConvBertTokenizer": BertConverter,
     "DebertaTokenizer": DebertaConverter,
     "DistilBertTokenizer": BertConverter,

src/transformers/models/__init__.py

Lines changed: 1 addition & 0 deletions
@@ -30,6 +30,7 @@
     blenderbot,
     blenderbot_small,
     camembert,
+    clip,
     convbert,
     cpm,
     ctrl,

src/transformers/models/auto/configuration_auto.py

Lines changed: 4 additions & 0 deletions
@@ -33,6 +33,7 @@
     BlenderbotSmallConfig,
 )
 from ..camembert.configuration_camembert import CAMEMBERT_PRETRAINED_CONFIG_ARCHIVE_MAP, CamembertConfig
+from ..clip.configuration_clip import CLIP_PRETRAINED_CONFIG_ARCHIVE_MAP, CLIPConfig
 from ..convbert.configuration_convbert import CONVBERT_PRETRAINED_CONFIG_ARCHIVE_MAP, ConvBertConfig
 from ..ctrl.configuration_ctrl import CTRL_PRETRAINED_CONFIG_ARCHIVE_MAP, CTRLConfig
 from ..deberta.configuration_deberta import DEBERTA_PRETRAINED_CONFIG_ARCHIVE_MAP, DebertaConfig
@@ -90,6 +91,7 @@
         (key, value)
         for pretrained_map in [
             # Add archive maps here
+            CLIP_PRETRAINED_CONFIG_ARCHIVE_MAP,
             BIGBIRD_PEGASUS_PRETRAINED_CONFIG_ARCHIVE_MAP,
             DEIT_PRETRAINED_CONFIG_ARCHIVE_MAP,
             LUKE_PRETRAINED_CONFIG_ARCHIVE_MAP,
@@ -144,6 +146,7 @@
 CONFIG_MAPPING = OrderedDict(
     [
         # Add configs here
+        ("clip", CLIPConfig),
         ("bigbird_pegasus", BigBirdPegasusConfig),
         ("deit", DeiTConfig),
         ("luke", LukeConfig),
@@ -204,6 +207,7 @@
 MODEL_NAMES_MAPPING = OrderedDict(
     [
         # Add full (and cased) model names here
+        ("clip", "CLIP"),
         ("bigbird_pegasus", "BigBirdPegasus"),
         ("deit", "DeiT"),
         ("luke", "LUKE"),

src/transformers/models/auto/modeling_auto.py

Lines changed: 3 additions & 0 deletions
@@ -81,6 +81,7 @@
     CamembertForTokenClassification,
     CamembertModel,
 )
+from ..clip.modeling_clip import CLIPModel
 from ..convbert.modeling_convbert import (
     ConvBertForMaskedLM,
     ConvBertForMultipleChoice,
@@ -299,6 +300,7 @@
     BlenderbotConfig,
     BlenderbotSmallConfig,
     CamembertConfig,
+    CLIPConfig,
     ConvBertConfig,
     CTRLConfig,
     DebertaConfig,
@@ -352,6 +354,7 @@
 MODEL_MAPPING = OrderedDict(
     [
         # Base model mapping
+        (CLIPConfig, CLIPModel),
         (BigBirdPegasusConfig, BigBirdPegasusModel),
         (DeiTConfig, DeiTModel),
         (LukeConfig, LukeModel),
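With ``(CLIPConfig, CLIPModel)`` added to ``MODEL_MAPPING``, ``AutoModel`` can load CLIP checkpoints directly. A short
sketch under the same assumptions as above:

.. code-block::

    >>> from transformers import AutoModel

    >>> model = AutoModel.from_pretrained("openai/clip-vit-base-patch32")
    >>> type(model).__name__  # resolved through MODEL_MAPPING
    'CLIPModel'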
