Description
System Info
- transformers version: 4.29.1
- Platform: Windows-10
- Python version: 3.8.3
- Huggingface_hub version: 0.14.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 2.0.0+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
Who can help?
@sgugger @ArthurZucker @amyeroberts
Information
- The official example scripts
- My own modified scripts
Tasks
- An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- My own task or dataset (give details below)
Reproduction
Running:

```python
import json
from transformers import AutoConfig

json_data = AutoConfig.from_pretrained('openai/clip-vit-base-patch16').to_dict()
json.dumps(json_data, indent=4)
```

results in:

```
TypeError: Object of type dtype is not JSON serializable
```
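As a temporary workaround (not a fix for the underlying bug), `json.dumps` accepts a `default` callable that is invoked for objects it cannot serialize; passing `default=str` makes it fall back to the string form of the dtype. A minimal self-contained sketch, using a stand-in class in place of `torch.dtype` so the example runs without torch:

```python
import json

class Unserializable:
    """Stand-in for torch.dtype: json.dumps cannot serialize it directly."""
    def __repr__(self):
        return "torch.float32"

data = {"torch_dtype": Unserializable(), "hidden_size": 512}

# json.dumps(data) would raise:
#   TypeError: Object of type Unserializable is not JSON serializable
# default=str tells the encoder to call str() on any unknown type instead.
print(json.dumps(data, default=str, indent=4))
```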
I have identified this problem with the following models:
- clip
- sam
- vision-encoder-decoder
Expected behavior
torch dtypes should be converted to strings. I believe this is because these configs redefine their `to_dict` method without calling `dict_torch_dtype_to_str` on the top-level object.
transformers/src/transformers/models/clip/configuration_clip.py
Lines 397 to 408 in de9255d
```python
def to_dict(self):
    """
    Serializes this instance to a Python dictionary. Override the default [`~PretrainedConfig.to_dict`].

    Returns:
        `Dict[str, any]`: Dictionary of all the attributes that make up this configuration instance,
    """
    output = copy.deepcopy(self.__dict__)
    output["text_config"] = self.text_config.to_dict()
    output["vision_config"] = self.vision_config.to_dict()
    output["model_type"] = self.__class__.model_type
    return output
```
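A sketch of the fix I have in mind: the overridden `to_dict` should pass its output through the dtype-to-string conversion before returning, as the base `PretrainedConfig.to_dict` does. The stand-in `FakeDtype` class and simplified `Config` below are assumptions for illustration, so the example runs without torch; the real fix would call the existing `dict_torch_dtype_to_str` helper:

```python
import copy
import json

class FakeDtype:
    """Stand-in for torch.dtype, whose repr looks like 'torch.float32'."""
    def __repr__(self):
        return "torch.float32"

class Config:
    def __init__(self):
        self.torch_dtype = FakeDtype()
        self.hidden_size = 512

    def dict_torch_dtype_to_str(self, d):
        # Mirrors the idea of PretrainedConfig.dict_torch_dtype_to_str:
        # replace a dtype object with its short string form, e.g. "float32".
        if d.get("torch_dtype") is not None and not isinstance(d["torch_dtype"], str):
            d["torch_dtype"] = str(d["torch_dtype"]).split(".")[1]

    def to_dict(self):
        output = copy.deepcopy(self.__dict__)
        # The step missing from the overridden to_dict: convert dtypes on the
        # top-level dict so json.dumps succeeds afterwards.
        self.dict_torch_dtype_to_str(output)
        return output

print(json.dumps(Config().to_dict(), indent=4))
```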