Emerging machine learning applications such as Internet-of-Things and medical devices must operate on battery-powered platforms. Because machine learning algorithms involve heavy, data-intensive computation, interest in energy-efficient, low-latency machine learning accelerators is growing. Since these applications exhibit a trade-off between energy and accuracy, a scalable architecture offering diverse operating points is a natural design direction.
This thesis presents a high-accuracy in-memory realization of the AdaBoost machine learning classifier. The proposed classifier employs a deep in-memory architecture (DIMA) and uses foreground calibration to compensate for PVT variations and improve task-level accuracy. The architecture switches between a high-accuracy/high-power (HA) mode and a low-power/low-accuracy (LP) mode via soft decision thresholding, providing an elegant energy-accuracy trade-off. The proposed realization achieves an energy-delay product (EDP) reduction of 43X over a digital architecture at an iso-accuracy of 95% on the MNIST dataset, a 5% accuracy improvement over a previous in-memory implementation of AdaBoost.
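The abstract does not specify the thresholding rule itself, so the following Python sketch only illustrates one plausible form of soft decision thresholding for an AdaBoost ensemble: the cheap LP evaluation is accepted when its decision margin is confident, and the sample is re-evaluated in the HA mode otherwise. All names and parameters here (e.g., `lp_learners`, `ha_learners`, `margin`) are illustrative assumptions, not the thesis's actual implementation.

```python
import numpy as np

def adaboost_score(x, weak_learners, alphas):
    """Weighted sum of weak-learner decisions (+1/-1) for a single sample."""
    return sum(a * h(x) for h, a in zip(weak_learners, alphas))

def classify_with_mode_switch(x, lp_learners, ha_learners, alphas, margin=0.5):
    """Evaluate the low-power (LP) ensemble first; fall back to the
    high-accuracy (HA) ensemble only when the soft decision margin of the
    LP result is too small to be trusted.  (Hypothetical rule, for
    illustration only.)"""
    lp_score = adaboost_score(x, lp_learners, alphas)
    if abs(lp_score) >= margin * sum(alphas):
        return np.sign(lp_score), "LP"   # confident low-power decision
    ha_score = adaboost_score(x, ha_learners, alphas)
    return np.sign(ha_score), "HA"       # re-evaluate in high-accuracy mode
```

In this sketch, raising `margin` pushes more samples into the HA mode (higher accuracy, higher energy), while lowering it keeps more decisions in the LP mode, mirroring the energy-accuracy knob described above.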