Turning complex benchmark workflows into straightforward runs — that’s what we built MLCFlow and its automation for.
I co-develop the MLCFlow automation framework, which streamlines benchmark submissions and makes results easier to reproduce for both experienced participants and first-time submitters.
In pursuit of making MLPerf Inference and Automotive benchmarking reproducible and hassle-free, I:
- Develop and maintain the MLCFlow automation framework.
- Integrate new benchmarks into automation pipelines as they’re released.
- Incorporate submitters’ results after each submission round to support reproducibility testing.
- Maintain and enhance benchmark visualizers.
- Build test infrastructure to keep the automation rock-solid as benchmarks evolve.
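
To give a concrete sense of what a straightforward run looks like, here is a minimal sketch of kicking off a single automated benchmark run from Python. The `mlcr` entry point, the `run-mlperf,inference` script tags, and the flags shown are illustrative assumptions rather than a verbatim documented command; the MLCFlow documentation has the exact invocation for each benchmark.

```python
# Minimal sketch: launching an MLCFlow-automated benchmark run from Python.
# The "mlcr" entry point, script tags, and flags below are illustrative
# assumptions, not an exact documented command.
import subprocess


def run_benchmark(model: str, scenario: str, device: str = "cpu") -> int:
    """Launch one automated benchmark run and return its exit code."""
    cmd = [
        "mlcr", "run-mlperf,inference",   # assumed automation script tags
        f"--model={model}",
        f"--scenario={scenario}",
        f"--device={device}",
        "--execution_mode=test",          # assumed flag for a quick smoke run
        "--quiet",
    ]
    result = subprocess.run(cmd)
    return result.returncode


if __name__ == "__main__":
    raise SystemExit(run_benchmark("resnet50", "Offline"))
```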