DiMO: Differentiable Model Optimization and metaDiMO
Brando Miranda
Description
- Title
- DiMO: Differentiable Model Optimization and metaDiMO
- Author(s)
- Brando Miranda
- Issue Date
- 2019-08-01
- Keyword(s)
- meta-learning
- NAS
- architecture search
- machine learning
- Abstract
- We propose an AutoML paradigm that is end-to-end differentiable, versatile, scalable, and by design optimized to transfer to different data sets. The proposed method is scalable because it is fully differentiable and therefore trainable with Gradient Descent (GD); consequently, it does not rely on Reinforcement Learning (RL) or evolutionary methods (a minimal sketch of this idea follows this record). The proposed method is versatile because every part of the system can be learned or constrained with appropriate priors (e.g. biologically plausible priors). The pipeline is highly modular: the architecture search and its hyperparameters, the optimizer, the credit-assignment scheme, and the performance predictor are all separate modules. This modularity gives rise to rich combinations with which to experiment, deploy, train, and constrain the system. Last but not least, the paradigm is formulated to consider multiple data sets at once, so every learned module is optimized to transfer to different tasks. Additionally, with a set of unknown data sets as part of the validation process, the system could potentially be trained to generalize to data sets it has not seen before.
- Type of Resource
- text
- Language
- en
- Permalink
- http://hdl.handle.net/2142/109085
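As a concrete illustration of the claim above, that the architecture choice itself can be relaxed into a differentiable form and searched with plain gradient descent, here is a minimal DARTS-style sketch in PyTorch. This is not code from the thesis: `MixedOp`, `alpha`, and the toy candidate operations are illustrative assumptions, and the joint update of network weights and architecture parameters simplifies the bi-level optimization such methods typically use.

```python
# A minimal sketch (assumed, not from the thesis) of differentiable
# architecture search: a softmax over architecture parameters `alpha`
# turns the discrete choice of operation into a smooth mixture, so the
# whole pipeline trains with gradient descent, with no RL or evolution.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixedOp(nn.Module):
    """Differentiable mixture over a hypothetical set of candidate ops."""
    def __init__(self, dim):
        super().__init__()
        self.ops = nn.ModuleList([
            nn.Identity(),                                   # skip connection
            nn.Linear(dim, dim),                             # linear map
            nn.Sequential(nn.Linear(dim, dim), nn.ReLU()),   # nonlinear map
        ])
        # Architecture parameters: one logit per candidate operation.
        self.alpha = nn.Parameter(torch.zeros(len(self.ops)))

    def forward(self, x):
        # The softmax relaxation keeps the operation choice differentiable.
        weights = F.softmax(self.alpha, dim=0)
        return sum(w * op(x) for w, op in zip(weights, self.ops))

# Both the network weights and the architecture parameters `alpha`
# receive gradients from the task loss, so one GD loop optimizes them
# jointly (real differentiable-NAS methods usually split this into a
# bi-level scheme over separate training and validation sets).
model = nn.Sequential(MixedOp(16), MixedOp(16), nn.Linear(16, 1))
opt = torch.optim.SGD(model.parameters(), lr=0.01)

x, y = torch.randn(32, 16), torch.randn(32, 1)  # toy data
loss = F.mse_loss(model(x), y)
opt.zero_grad()
loss.backward()
opt.step()
```

After training, the operation with the largest `alpha` in each `MixedOp` would be read off as the discovered architecture.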