Dynamic Voltage and Frequency Scaling in NoCs with Supervised and Reinforcement Learning Techniques

Quintin Fettes, Mark Clark, Razvan Bunescu, Avinash Karanth, Ahmed Louri

Research output: Contribution to journal › Article


Abstract

Network-on-Chips (NoCs) are the de facto choice for designing the interconnect fabric in multicore chips due to their regularity, efficiency, simplicity, and scalability. However, NoCs suffer from excessive static power and dynamic energy due to transistor leakage current and data movement between the cores and caches. Power consumption issues are only exacerbated by ever-decreasing technology sizes. Dynamic Voltage and Frequency Scaling (DVFS) is one technique that seeks to reduce dynamic energy; however, this often comes at the expense of performance. In this paper, we propose LEAD (Learning-Enabled Energy-Aware Dynamic voltage/frequency scaling) for multicore architectures, using both supervised learning and reinforcement learning approaches. LEAD groups the router and its outgoing links into the same V/F domain and implements proactive DVFS mode management strategies that rely on offline-trained machine learning models to provide optimal V/F mode selection between different voltage/frequency pairs. We present three supervised learning versions of LEAD, based on buffer utilization, change in buffer utilization, and change in energy/throughput, which allow proactive mode selection based on accurate prediction of future network parameters. We then describe a reinforcement learning approach to LEAD that optimizes the DVFS mode selection directly, obviating the need for label and threshold engineering. Simulation results using PARSEC and Splash-2 benchmarks on a 4 × 4 concentrated mesh architecture show that by using supervised learning LEAD can achieve an average dynamic energy savings of 15.4% for a loss in throughput of 0.8%, with no significant impact on latency. When reinforcement learning is used, LEAD increases average dynamic energy savings to 20.3% at the cost of a 1.5% decrease in throughput and a 1.7% increase in latency. Overall, the more flexible reinforcement learning approach enables learning an optimal behavior for a wider range of load environments under any desired energy vs. throughput tradeoff.
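To make the two flavors of LEAD concrete, here is a minimal sketch of the supervised variant: an offline-trained ridge regression (the keyword below) that predicts near-future buffer utilization, with a hand-engineered threshold mapping from the prediction to a discrete V/F mode. This is an illustration, not the authors' implementation; the VF_MODES table, the feature encoding, and the threshold values are all hypothetical.

```python
import numpy as np

# Hypothetical discrete V/F pairs; LEAD groups a router and its
# outgoing links into one V/F domain and picks among such modes.
VF_MODES = [(0.8, 1.0), (1.0, 1.5), (1.2, 2.0)]  # (volts, GHz), illustrative

class RidgeDVFSPredictor:
    """Offline-trained ridge regression that predicts next-window
    buffer utilization from a window of recent utilization samples."""

    def __init__(self, alpha=1.0):
        self.alpha = alpha  # L2 regularization strength
        self.w = None

    def fit(self, X, y):
        # Closed-form ridge solution: w = (X^T X + alpha * I)^-1 X^T y
        n_features = X.shape[1]
        A = X.T @ X + self.alpha * np.eye(n_features)
        self.w = np.linalg.solve(A, X.T @ y)

    def predict(self, x):
        return float(x @ self.w)

def select_mode(predicted_util, thresholds=(0.3, 0.6)):
    # Hand-tuned thresholds map predicted utilization to a V/F mode.
    # This label/threshold engineering is exactly what the RL variant
    # of LEAD avoids by optimizing mode selection directly.
    lo, hi = thresholds
    if predicted_util < lo:
        return 0  # lowest V/F mode: save energy under light load
    elif predicted_util < hi:
        return 1
    return 2      # highest V/F mode: preserve throughput under load
```

By contrast, the reinforcement learning variant learns the mode selection policy directly from an energy-vs-throughput reward, with no utilization labels or thresholds. The sketch below uses tabular Q-learning over discretized utilization states as one plausible instantiation; the state discretization, hyperparameters, and the beta tradeoff weight in the reward are assumptions for illustration.

```python
import numpy as np

class QLearningDVFS:
    """Tabular Q-learning mode selector: states are discretized buffer
    utilization levels, actions are V/F modes, and the reward encodes
    the desired energy vs. throughput tradeoff."""

    def __init__(self, n_states=10, n_modes=3, lr=0.1, gamma=0.9, eps=0.1):
        self.q = np.zeros((n_states, n_modes))
        self.lr, self.gamma, self.eps = lr, gamma, eps
        self.n_states, self.n_modes = n_states, n_modes

    def discretize(self, util):
        # Map utilization in [0, 1] to a discrete state index.
        return min(int(util * self.n_states), self.n_states - 1)

    def act(self, util):
        s = self.discretize(util)
        if np.random.rand() < self.eps:           # explore
            return np.random.randint(self.n_modes)
        return int(np.argmax(self.q[s]))          # exploit learned policy

    def update(self, util, mode, reward, next_util):
        # Standard one-step Q-learning temporal-difference update.
        s, s2 = self.discretize(util), self.discretize(next_util)
        td_target = reward + self.gamma * self.q[s2].max()
        self.q[s, mode] += self.lr * (td_target - self.q[s, mode])

def reward(energy_saved, throughput_loss, beta=0.5):
    # Weighted tradeoff; tuning beta realizes "any desired energy vs.
    # throughput tradeoff" mentioned in the abstract (beta is assumed).
    return energy_saved - beta * throughput_loss
```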

Original language: English (US)
Journal: IEEE Transactions on Computers
Publication status: Accepted/In press - Jan 1 2018
Externally published: Yes

Keywords

  • Dynamic Voltage and Frequency Scaling (DVFS)
  • Machine Learning (ML)
  • Reinforcement Learning
  • Ridge Regression

ASJC Scopus subject areas

  • Software
  • Theoretical Computer Science
  • Hardware and Architecture
  • Computational Theory and Mathematics
