https://www.riss.kr/link?id=A107503437
Year: 2017
Indexing: SCIE, SCOPUS
Type: Academic journal
Pages: 1-25 (25 pages)

Multilingual Abstract
Data compression plays a pivotal role in improving system performance and reducing energy consumption, because it increases the effective logical capacity of a compressed memory system without physically increasing the memory size. However, data compression techniques incur costs such as non-negligible compression and decompression overhead, and this overhead becomes more severe when compression is applied to the cache. In this article, we aim to minimize the read-hit decompression penalty in compressed Last-Level Caches (LLCs) by speculatively decompressing frequently used cachelines. To this end, we propose a Hot-cacheline Prediction and Early decompression (HoPE) mechanism that consists of three synergistic techniques: Hot-cacheline Prediction (HP), Early Decompression (ED), and Hit-history-based Insertion (HBI). HP and HBI efficiently identify hot compressed cachelines, while ED selectively decompresses hot cachelines based on their size information. Unlike previous approaches, the HoPE framework considers the tradeoff between the increased effective cache capacity and the decompression penalty. To evaluate the effectiveness of the proposed HoPE mechanism, we run extensive simulations on memory traces obtained from multi-threaded benchmarks running on a full-system simulation framework. We observe significant performance improvements over compressed cache schemes employing the conventional Least-Recently Used (LRU) replacement policy, the Dynamic Re-Reference Interval Prediction (DRRIP) scheme, and the Effective Capacity Maximizer (ECM) compressed cache management mechanism. Specifically, HoPE improves system performance by approximately 11% on average over LRU, 8% over DRRIP, and 7% over ECM by reducing the read-hit decompression penalty by around 65% across a wide range of applications.
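To make the interaction between hit-history tracking and size-aware early decompression more concrete, the following is a minimal C++ sketch. The abstract does not describe the actual hardware structures, so every name and parameter below (HotPredictor, hit_threshold, size_limit, the 64-byte line size) is an illustrative assumption, not the paper's design.

// Hypothetical sketch of hot-cacheline prediction with early decompression.
// Names and thresholds are illustrative; the real HoPE structures are
// specified in the paper, not in this abstract.
#include <cstdint>
#include <cstdio>

struct CacheLine {
    uint64_t tag = 0;
    bool     valid = false;
    bool     compressed = false;   // stored in compressed form
    uint32_t size_bytes = 64;      // compressed size; 64 B when uncompressed
    uint32_t hit_count = 0;        // per-line hit history (HBI-style signal)
};

class HotPredictor {
public:
    HotPredictor(uint32_t hit_threshold, uint32_t size_limit)
        : threshold_(hit_threshold), size_limit_(size_limit) {}

    // Called on every read hit: record hit history and decide whether the
    // line is "hot" enough to justify keeping it decompressed.
    bool on_read_hit(CacheLine& line) {
        ++line.hit_count;
        bool is_hot = line.hit_count >= threshold_;
        // ED step: decompress early only if the line is hot AND small enough
        // that holding it uncompressed does not erase the capacity benefit.
        if (is_hot && line.compressed && line.size_bytes <= size_limit_) {
            decompress(line);   // speculative, off the critical read path
            return true;
        }
        return false;
    }

private:
    static void decompress(CacheLine& line) {
        line.compressed = false;
        line.size_bytes = 64;   // back to a full uncompressed line
    }

    uint32_t threshold_;    // hits before a line is treated as hot
    uint32_t size_limit_;   // max compressed size eligible for early decompression
};

int main() {
    HotPredictor hope(/*hit_threshold=*/2, /*size_limit=*/32);

    CacheLine line;
    line.valid = true;
    line.compressed = true;
    line.size_bytes = 24;   // a well-compressed line

    // Repeated hits eventually mark the line hot; the third access finds it
    // already decompressed, so it pays no decompression latency.
    for (int access = 1; access <= 3; ++access) {
        bool decompressed_now = hope.on_read_hit(line);
        std::printf("access %d: compressed=%d decompressed_now=%d\n",
                    access, line.compressed, decompressed_now);
    }
    return 0;
}

In this sketch, early decompression is gated by both a hit threshold and a compressed-size limit, mirroring the capacity-versus-decompression-penalty tradeoff the abstract emphasizes; the paper's actual policy and size criteria may differ.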
Related articles:
Using CoreSight PTM to Integrate CRA Monitoring IPs in an ARM-Based SoC
TEI-power: Temperature Effect Inversion-Aware Dynamic Thermal Management