Modeling of deep in-memory architecture (DIMA) IC for DNNs
Ozyapici, Alkin
Permalink
https://hdl.handle.net/2142/110307
Description
- Title
- Modeling of deep in-memory architecture (DIMA) IC for DNNs
- Author(s)
- Ozyapici, Alkin
- Contributor(s)
- Shanbhag, Naresh
- Issue Date
- 2021-05
- Keyword(s)
- in-memory processing
- machine learning
- accelerator
- modeling
- PyTorch
- Abstract
- The Deep In-Memory Architecture (DIMA) IC from Prof. Shanbhag's group is studied in this thesis. DIMA embeds the dot-product computation in the periphery of the memory and, as a result, achieves energy and latency savings compared to conventional architectures. However, DIMA suffers accuracy drops due to manufacturing- and architecture-related non-idealities. A functional model of the DIMA IC was implemented in Python, using analytical models to capture the effects of threshold-voltage variations in the 6T SRAM cell and other non-idealities, in order to simulate the accuracy of deep neural networks (DNNs) on DIMA. Two DNNs are considered: VGG11 and ResNet-20. The networks were tested on the CIFAR-10 and CIFAR-100 datasets to examine the effect of classification-task complexity. While the floating-point accuracies of VGG11 and ResNet-20 on CIFAR-10 were 92.12% and 91.20% respectively, mapping the networks onto DIMA dropped the accuracies to 87.03% and 77.42%. These results show that DNNs lose accuracy when mapped onto DIMA due to quantization and the additional non-idealities of the architecture. To compensate for circuit-specific noise, the studied DIMA IC also includes an on-chip trainer. On-chip training was simulated by retraining the networks after they were mapped onto DIMA (an illustrative sketch of this simulation flow appears below the record). After on-chip training, DIMA achieved accuracies comparable to, if not better than, the baseline 6-bit fixed-point accuracy for both DNNs on both CIFAR-10 and CIFAR-100. The accuracy drop was more severe on CIFAR-100: for VGG11, accuracy fell from 68.74% to 46.37% before on-chip training. This indicates that DIMA's noise affects the 100-class classification task of CIFAR-100 more strongly than the 10-class task of CIFAR-10.
- Type of Resource
- text
- Language
- en
- Permalink
- http://hdl.handle.net/2142/110307
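
The record does not include implementation details, but the flow the abstract describes (6-bit fixed-point quantization, additive weight noise standing in for SRAM threshold-voltage variation, then noise-aware retraining to emulate the on-chip trainer) can be illustrated with a minimal PyTorch sketch. Everything below (the NoisyDIMALinear layer, the quantize_ste helper, and the noise_sigma value) is a hypothetical illustration of this kind of simulation, not the thesis's actual model.

```python
# Hypothetical sketch of simulating DIMA non-idealities in PyTorch.
# The Gaussian weight noise is only a stand-in for the thesis's
# analytical models of 6T SRAM threshold-voltage variation; the
# sigma value is illustrative, not taken from the thesis.
import torch
import torch.nn as nn
import torch.nn.functional as F

def quantize_ste(w: torch.Tensor, bits: int = 6) -> torch.Tensor:
    """6-bit fixed-point quantization with a straight-through estimator,
    so gradients still flow during the simulated on-chip retraining."""
    scale = w.abs().max().detach() / (2 ** (bits - 1) - 1) + 1e-12
    w_q = torch.round(w / scale) * scale
    return w + (w_q - w).detach()  # forward: quantized; backward: identity

class NoisyDIMALinear(nn.Linear):
    """Linear layer whose forward pass mimics a DIMA dot product:
    quantized weights perturbed by additive Gaussian noise."""
    def __init__(self, in_features, out_features, bits=6, noise_sigma=0.02):
        super().__init__(in_features, out_features)
        self.bits, self.noise_sigma = bits, noise_sigma

    def forward(self, x):
        w_q = quantize_ste(self.weight, self.bits)
        noise = self.noise_sigma * w_q.abs().mean() * torch.randn_like(w_q)
        return F.linear(x, w_q + noise, self.bias)

# "On-chip training" is then simulated as ordinary fine-tuning with the
# noisy layers in place, so the weights adapt to quantization and noise:
layer = NoisyDIMALinear(128, 10)
opt = torch.optim.SGD(layer.parameters(), lr=1e-3)
x, y = torch.randn(32, 128), torch.randint(0, 10, (32,))
loss = F.cross_entropy(layer(x), y)
loss.backward()
opt.step()
```

Under this kind of simulation, replacing a network's linear (and, analogously, convolutional) layers with their noisy counterparts before retraining would correspond to the mapped-then-retrained experiments summarized in the abstract.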
Owning Collections
Senior Theses - Electrical and Computer Engineering
The best of ECE undergraduate research