TY - GEN
T1 - SCART
T2 - 10th International Green and Sustainable Computing Conference, IGSC 2019
AU - Gajaria, Dhruv
AU - Kuan, Kyle
AU - Adegbija, Tosiron
N1 - Publisher Copyright:
© 2019 IEEE.
PY - 2019/10
Y1 - 2019/10
N2 - Prior studies have shown that the retention time of the non-volatile spin-transfer torque RAM (STT-RAM) can be relaxed in order to reduce STT-RAM's write energy and latency. However, since different applications may require different retention times, STT-RAM retention times must be critically explored to satisfy various applications' needs. This process can be challenging due to exploration overhead, and is exacerbated by the fact that STT-RAM caches are emerging and are not readily available for design-time exploration. This paper explores using known and easily obtainable statistics (e.g., SRAM statistics) to predict the appropriate STT-RAM retention times, in order to minimize exploration overhead. We propose an STT-RAM Cache Retention Time (SCART) model, which utilizes machine learning to enable design-time or runtime prediction of right-provisioned STT-RAM retention times for latency or energy optimization. Experimental results show that, on average, SCART can reduce the latency and energy by 20.34% and 29.12%, respectively, compared to a homogeneous retention time, while reducing the exploration overheads by 52.58% compared to prior work.
KW - Spin-Transfer Torque RAM (STT-RAM) cache
KW - adaptable hardware
KW - configurable memory
KW - low-power embedded systems
KW - retention time
UR - http://www.scopus.com/inward/record.url?scp=85079271059&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85079271059&partnerID=8YFLogxK
U2 - 10.1109/IGSC48788.2019.8957182
DO - 10.1109/IGSC48788.2019.8957182
M3 - Conference contribution
AN - SCOPUS:85079271059
T3 - 2019 10th International Green and Sustainable Computing Conference, IGSC 2019
BT - 2019 10th International Green and Sustainable Computing Conference, IGSC 2019
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 21 October 2019 through 24 October 2019
ER -