I have updated the transformers package and I am using the ViLT model: https://huggingface.co/docs/transformers/model_doc/vilt#transformers.ViltForQuestionAnswering

I am getting an error. Is load_in_8bit not integrated with all Hugging Face models? Could you please let me know how to use load_in_8bit with any Hugging Face model, not just BLOOM and T5?
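For context, this is roughly what I am attempting. It is a sketch, not a confirmed-working call: it assumes bitsandbytes and accelerate are installed and a CUDA GPU is available, and the checkpoint id `dandelin/vilt-b32-finetuned-vqa` is just the one I happen to be testing with.

```python
from transformers import ViltForQuestionAnswering


def load_vilt_8bit(model_id: str = "dandelin/vilt-b32-finetuned-vqa"):
    """Try to load ViLT in 8-bit the same way it works for BLOOM/T5.

    Assumes `pip install bitsandbytes accelerate` and a CUDA GPU;
    this is the call that raises the error for me.
    """
    return ViltForQuestionAnswering.from_pretrained(
        model_id,
        load_in_8bit=True,  # 8-bit quantized weights via bitsandbytes
        device_map="auto",  # let accelerate place the layers
    )
```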
