DESIGN AND IMPLEMENTATION OF MLHARNESS
Chang, Yen-Hsiang
Permalink: https://hdl.handle.net/2142/115016
Description
- Title: DESIGN AND IMPLEMENTATION OF MLHARNESS
- Author(s): Chang, Yen-Hsiang
- Issue Date: 2021-12
- Keyword(s): machine learning; deep learning; inference; benchmarking
- Language: en
- Abstract: With society’s growing adoption of machine learning (ML) and deep learning (DL) for various intelligent solutions, it becomes increasingly imperative to standardize a common set of measures for ML/DL models with large-scale open datasets under common development practices and resources, so that people can benchmark and compare models’ quality and performance on common ground. MLCommons Inference has emerged recently as a driving force from both industry and academia to orchestrate such an effort. Despite its wide adoption as a standardized benchmark, MLCommons Inference has included only a limited number of ML/DL models (seven models in total as of this thesis). This significantly limits the generality of MLCommons Inference, because the research community keeps producing novel ML/DL models that solve a wide range of problems across different modalities. To address this limitation, we propose MLHarness, a scalable benchmarking harness system for MLCommons Inference with three distinctive features: (1) it codifies the standard benchmarking process as defined by MLCommons Inference, including models, datasets, frameworks, and software and hardware systems; (2) it provides an easy and declarative approach for model developers to contribute their models and datasets to MLCommons Inference; and (3) it includes support for a wide range of models with varying modalities, so that we can scalably benchmark these models across different datasets, frameworks, and systems. This harness system is developed on top of MLModelScope and is open-sourced to the community. Our experimental results demonstrate the superior flexibility and scalability of this harness system for MLCommons Inference benchmarking.
- Graduation Semester: 2022-10-15T16:31:38-05:00
- Type of Resource: text
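The abstract describes a declarative approach for contributing models and datasets to the harness. As a purely illustrative sketch of what such a declarative model description could look like (the field names, the register_model helper, and all example values below are assumptions made for illustration, not the thesis's or MLModelScope's actual schema or API):

```python
# Hypothetical sketch only: field names, register_model(), and example values
# are assumptions for illustration, not MLHarness's or MLModelScope's real API.
from dataclasses import dataclass


@dataclass
class ModelManifest:
    """Declarative description of a model to be benchmarked."""
    name: str
    framework: str               # e.g. "onnxruntime" or "pytorch"
    task: str                    # e.g. "image_classification"
    weights_url: str             # where the serialized model can be fetched
    dataset: str                 # dataset identifier used for accuracy checks
    preprocess: str = "default"  # named pre-processing pipeline
    postprocess: str = "default" # named post-processing pipeline


# In-memory registry standing in for the harness's model catalog.
REGISTRY: dict[str, ModelManifest] = {}


def register_model(manifest: ModelManifest) -> None:
    """Record a manifest so the harness could later load and benchmark it."""
    REGISTRY[manifest.name] = manifest


if __name__ == "__main__":
    register_model(ModelManifest(
        name="resnet50-v1.5",
        framework="onnxruntime",
        task="image_classification",
        weights_url="https://example.com/resnet50.onnx",  # placeholder URL
        dataset="imagenet-2012-val",
    ))
    print(REGISTRY["resnet50-v1.5"])
```

In this sketch, contributing a new model amounts to filling in a single manifest rather than writing framework-specific benchmarking code, which mirrors the declarative intent the abstract describes.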
Owning Collections
- Undergraduate Theses at Illinois (primary)