Different tokenizer silently being loaded based on trust_remote_code #34882

@DarkLight1337

Description

System Info

python==3.9.20
transformers==4.46.2

Who can help?

@ArthurZucker

Information

  • The official example scripts
  • My own modified scripts

Tasks

  • An officially supported task in the examples folder (such as GLUE/SQuAD, ...)
  • My own task or dataset (give details below)

Reproduction

I'm trying to use Alibaba-NLP/gte-Qwen2-1.5B-instruct in vLLM, but I found that the tokenizer loaded via HF Transformers has a different padding_side depending on the trust_remote_code setting:

>>> from transformers import AutoTokenizer
>>> AutoTokenizer.from_pretrained("Alibaba-NLP/gte-Qwen2-1.5B-instruct", trust_remote_code=False)
Qwen2TokenizerFast(name_or_path='Alibaba-NLP/gte-Qwen2-1.5B-instruct', vocab_size=151643, model_max_length=32768, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'eos_token': '<|endoftext|>', 'pad_token': '<|endoftext|>', 'additional_special_tokens': ['<|im_start|>', '<|im_end|>']}, clean_up_tokenization_spaces=False),  added_tokens_decoder={
        151643: AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
        151644: AddedToken("<|im_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
        151645: AddedToken("<|im_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
}
>>> AutoTokenizer.from_pretrained("Alibaba-NLP/gte-Qwen2-1.5B-instruct", trust_remote_code=True)
Qwen2TokenizerFast(name_or_path='Alibaba-NLP/gte-Qwen2-1.5B-instruct', vocab_size=151643, model_max_length=32768, is_fast=True, padding_side='left', truncation_side='right', special_tokens={'eos_token': '<|endoftext|>', 'pad_token': '<|endoftext|>', 'additional_special_tokens': ['<|im_start|>', '<|im_end|>']}, clean_up_tokenization_spaces=False),  added_tokens_decoder={
        151643: AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
        151644: AddedToken("<|im_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
        151645: AddedToken("<|im_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
}
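
To make the difference concrete, here is a minimal sketch (variable names are mine) showing where each variant places the pad tokens in a batched encoding:

from transformers import AutoTokenizer

# Only padding_side differs between the two variants.
tok_right = AutoTokenizer.from_pretrained(
    "Alibaba-NLP/gte-Qwen2-1.5B-instruct", trust_remote_code=False
)
tok_left = AutoTokenizer.from_pretrained(
    "Alibaba-NLP/gte-Qwen2-1.5B-instruct", trust_remote_code=True
)

batch = ["short", "a somewhat longer sentence"]
# With padding_side='right', pad tokens are appended after the text;
# with padding_side='left', they are prepended before it.
print(tok_right(batch, padding=True)["input_ids"])
print(tok_left(batch, padding=True)["input_ids"])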

This setting significantly affects the output of the embedding model. Is this intended behavior? I suspect no error is raised because the custom tokenizer in this HF repo (see relevant file) is named Qwen2TokenizerFast, which is also the name of a tokenizer class defined in core Transformers.
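
For context on why the padding side matters here: embedding models in this family typically pool the hidden state of the last non-pad token, which is exactly where the padding side bites. A minimal sketch of such pooling (my own helper, assuming right padding; not code from the model repo):

import torch

def last_token_pool(last_hidden_state: torch.Tensor,
                    attention_mask: torch.Tensor) -> torch.Tensor:
    # With left padding, position -1 of every sequence is a real token,
    # so last_hidden_state[:, -1] would suffice. With right padding,
    # position -1 may be a pad token, so the index of the last real
    # token must be recovered from the attention mask instead.
    seq_lens = attention_mask.sum(dim=1) - 1           # last non-pad index
    batch_idx = torch.arange(last_hidden_state.size(0))
    return last_hidden_state[batch_idx, seq_lens]

If the pooling code assumes one padding side and the silently loaded tokenizer uses the other, the pooled vectors come from the wrong positions.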

Expected behavior

To avoid silently loading the wrong tokenizer, the code should raise an error if a custom tokenizer is defined in the HF repo but trust_remote_code=False, regardless of whether a tokenizer with the same name is defined inside core Transformers.
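
One possible shape for such a guard, as a rough sketch (the function and its signature are illustrative, not actual Transformers internals):

from typing import Optional

def check_remote_tokenizer(auto_map: Optional[dict], trust_remote_code: bool) -> None:
    # auto_map is how a repo's tokenizer_config.json declares a custom
    # tokenizer class. If one is declared but remote code is not trusted,
    # fail loudly instead of silently falling back to a library class
    # that happens to share the custom class's name.
    if auto_map and "AutoTokenizer" in auto_map and not trust_remote_code:
        raise ValueError(
            "This repo defines a custom tokenizer via auto_map, but "
            "trust_remote_code=False; pass trust_remote_code=True to load it."
        )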
