Description
System Info
- `transformers` version: 4.46.2
- Platform: Linux-5.4.0-1134-aws-x86_64-with-glibc2.31
- Python version: 3.10.2
- Huggingface_hub version: 0.26.2
- Safetensors version: 0.4.5
- Accelerate version: 1.1.1
- Accelerate config: not found
- PyTorch version (GPU?): 2.5.1+cu124 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: No
- Using GPU in script?: Yes
- GPU type: Tesla T4
Who can help?
@amyeroberts , @quvb
Information
- The official example scripts
- My own modified scripts
Tasks
- An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- My own task or dataset (give details below)
Reproduction
Code to reproduce:

```python
from PIL import Image

# processor, model, messages and DEVICE are defined earlier (omitted here)
img1 = Image.open('Image1.JPG')
img2 = Image.open('Image2.JPG')

prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=prompt, images=[img1, img2], return_tensors="pt")
inputs = {k: v.to(DEVICE) for k, v in inputs.items()}

# Generate
generated_ids = model.generate(**inputs, max_new_tokens=512)
generated_texts = processor.batch_decode(generated_ids, skip_special_tokens=True)
print(generated_texts)
```
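To illustrate where the crash comes from, here is a minimal sketch (a simplified re-implementation for illustration, not the actual `transformers` source) of the text/image interleaving that `processing_idefics3.py` performs around line 302: the text is split on the image token, and one expanded image prompt is substituted per split point. When more images are passed than there are image tokens in the prompt, the indexing runs past the end of the split list.

```python
# Hypothetical simplified sketch of the Idefics3 processor's interleaving
# step; IMAGE_TOKEN and interleave() are illustrative names, not the
# library's API.
IMAGE_TOKEN = "<image>"

def interleave(sample_text, image_prompt_strings):
    # Split the prompt on the image token, then re-join it with one
    # expanded image prompt string per split point.
    split_sample = sample_text.split(IMAGE_TOKEN)
    sample = split_sample[0]
    for i, image_prompt_string in enumerate(image_prompt_strings):
        # If there are more images than image tokens in the text,
        # split_sample[i + 1] does not exist -> IndexError.
        sample += image_prompt_string + split_sample[i + 1]
    return sample

# One <image> token but two images: reproduces the crash.
try:
    interleave("Describe: <image>", ["<img1>", "<img2>"])
except IndexError as e:
    print("IndexError:", e)

# Two tokens, two images: interleaves cleanly.
print(interleave("Compare <image> and <image>.", ["<img1>", "<img2>"]))
```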
```python
IndexError                                Traceback (most recent call last)
Cell In[4], line 6
      3 img2=Image.open('Image2.JPG')
      5 prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
----> 6 inputs = processor(text=[prompt,prompt], images=[img1,img2], return_tensors="pt")
      7 inputs = {k: v.to(DEVICE) for k, v in inputs.items()}
      9 # Generate

File ~/envs/default/lib/python3.10/site-packages/transformers/models/idefics3/processing_idefics3.py:302, in Idefics3Processor.__call__(self, images, text, audio, videos, image_seq_len, **kwargs)
    300 sample = split_sample[0]
    301 for i, image_prompt_string in enumerate(image_prompt_strings):
--> 302 sample += image_prompt_string + split_sample[i + 1]
    303 prompt_strings.append(sample)
    305 text_inputs = self.tokenizer(text=prompt_strings, **output_kwargs["text_kwargs"])

IndexError: list index out of range
```
Expected behavior
I would expect the model to accept the 2 images in the input and generate text using both images as context.
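For the expected behavior to work, the number of image entries in `messages` has to match the number of images passed to the processor, so that `apply_chat_template` inserts one image token per image. A minimal sketch, assuming the standard chat-message format used by `transformers` multimodal processors (the question text here is illustrative):

```python
# Sketch of a messages structure with one {"type": "image"} entry per
# image, so the templated prompt contains two image tokens to match
# images=[img1, img2].
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "image"},
            {"type": "text", "text": "What are the differences between these two images?"},
        ],
    },
]

# Sanity check: two image placeholders for two images.
num_image_entries = sum(
    1 for part in messages[0]["content"] if part["type"] == "image"
)
print(num_image_entries)
```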