[V1][TPU] TPU multimodal model support #13496
Now that #13049 has landed, this is an updated version of #12133.

This PR is currently focused on usability and correctness for Llava-style multimodal models, not performance.
When using a multimodal model, we will pre-compile the prefills using `inputs_embeds` as input rather than `input_ids`. We will still use `input_ids` for decode in this iteration, but this will change with the addition of proper chunked prefill.

This does not deal with pre-compiling the encoder forward pass, so if the model is passed an image/video/audio input with a previously unseen shape, it will force a compilation at runtime.
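To illustrate the precompile flow described above, here is a minimal sketch (not the actual code from this PR) of warming up prefill graphs over a set of padded sequence lengths using `inputs_embeds`. The bucket sizes, `HIDDEN_SIZE`, and the `model(...)` call signature are assumptions for illustration only:

```python
# Hypothetical sketch of multimodal prefill warm-up on TPU via torch_xla.
import torch
import torch_xla.core.xla_model as xm

PREFILL_LENGTHS = [16, 32, 64, 128]  # padded prefill bucket sizes (assumed)
HIDDEN_SIZE = 4096                   # model embedding width (assumed)

device = xm.xla_device()

def warm_up_prefill(model):
    for seq_len in PREFILL_LENGTHS:
        # Multimodal prefill compiles against embedding inputs, since image
        # features are merged into the embedding sequence before the LM runs.
        inputs_embeds = torch.zeros(1, seq_len, HIDDEN_SIZE, device=device)
        positions = torch.arange(seq_len, device=device).unsqueeze(0)
        with torch.no_grad():
            model(inputs_embeds=inputs_embeds, positions=positions)
        # Cut the graph here so XLA compiles this bucket during warm-up
        # rather than on the first real request.
        xm.mark_step()
```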
Tested Examples

Image:
- Llava ✅ (usage sketch below)

Audio:
- Qwen2 Audio ✅
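For reference, a hedged sketch of how the Llava case above can be exercised through vLLM's offline inference API. The model name, prompt template, and image path are illustrative, and TPU-specific setup is omitted:

```python
# Sketch: running a Llava-style multimodal prompt with vLLM's offline API.
from vllm import LLM, SamplingParams
from PIL import Image

llm = LLM(model="llava-hf/llava-1.5-7b-hf", max_model_len=4096)

image = Image.open("example.jpg")
outputs = llm.generate(
    {
        "prompt": "USER: <image>\nWhat is in this picture?\nASSISTANT:",
        "multi_modal_data": {"image": image},
    },
    SamplingParams(max_tokens=64),
)
print(outputs[0].outputs[0].text)
```

Under this PR, the prefill for such a request runs against the merged `inputs_embeds`, so the padded text-plus-image sequence hits a pre-compiled graph, while a new image shape would still trigger encoder compilation at runtime.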