[E2E] Huggingface DebertaV2ForQuestionAnswering got fail_accuracy #1216

@mengfei25

Description

🐛 Describe the bug

Failing dtypes: float32, float16, and bfloat16. AMP passes.
python benchmarks/dynamo/huggingface.py --accuracy --float32 -d xpu -n10 --training --only DebertaV2ForQuestionAnswering --backend=inductor
xpu train DebertaV2ForQuestionAnswering
E1220 16:43:35.601000 756971 site-packages/torch/_dynamo/utils.py:2307] RMSE (res-fp64): 0.53515, (ref-fp64): 0.01636 and shape=torch.Size([]). res.dtype: torch.float32, multiplier: 3.000000, tol: 0.010000, use_larger_multiplier_for_smaller_tensor: 0
fail_accuracy
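For context on how this failure is decided: the benchmark harness compares the Inductor result and the eager result against an fp64 reference, and flags `fail_accuracy` when the compiled result's RMSE exceeds a multiple of the eager baseline's RMSE. The sketch below is a simplified illustration of that check (the real logic in `torch/_dynamo/utils.py` handles nested structures, small tensors, and other special cases); the function name `passes_accuracy` is made up for this example. Plugging in the numbers from the log above (res RMSE 0.53515 vs. ref RMSE 0.01636, multiplier 3, tol 0.01) shows why this run fails.

```python
import torch

def rmse(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    # Root-mean-square error between two tensors.
    return torch.sqrt(torch.mean((a - b) ** 2))

def passes_accuracy(res: torch.Tensor, ref: torch.Tensor,
                    fp64_ref: torch.Tensor,
                    multiplier: float = 3.0, tol: float = 0.01) -> bool:
    # Simplified version of the dynamo accuracy check: the compiled
    # result (res) may deviate from the fp64 reference, but only by
    # roughly `multiplier` times as much as the eager result (ref) does.
    res_error = rmse(res.double(), fp64_ref)   # RMSE (res-fp64) in the log
    ref_error = rmse(ref.double(), fp64_ref)   # RMSE (ref-fp64) in the log
    return bool(res_error <= multiplier * ref_error + tol)
```

With the logged values, the threshold is 3 * 0.01636 + 0.01 ≈ 0.059, far below the observed 0.53515, so the check reports `fail_accuracy`.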

Versions

env:
python: 3.10
XPU_OPS: 9ed0a1a
TRITON_COMMIT_ID: e98b6fcb8df5b44eb0d0addb6767c573d37ba024
TORCH_COMMIT_ID: 4f8b7c4272db521f7ffc4070ce1bdece513d1183
TRANSFORMERS_VERSION: 243e186efbf7fb93328dd6b34927a4e8c8f24395
DRIVER_VERSION: 1.23.10.49.231129.50
KERNEL_VERSION: 5.15.0-73-generic #80-Ubuntu SMP Mon May 15 15:18:26 UTC 2023
BUNDLE_VERSION: 2025.0.1.20241113
OS_PRETTY_NAME: Ubuntu 22.04.2 LTS
GCC_VERSION: 11
