In computing, cache algorithms (also frequently called cache replacement algorithms or cache replacement policies) are optimizing instructions, or algorithms, that a computer program or a hardware-maintained structure can follow in order to manage a cache of information stored on the computer. When the cache is full, the algorithm must choose which items to discard to make room for the new ones.

== Overview ==
The average memory reference time is

: T = m × T_m + T_h + E

where
: T = average memory reference time
: m = miss ratio = 1 − (hit ratio)
: T_m = time to make a main memory access when there is a miss (or, with a multi-level cache, the average memory reference time for the next-lower cache)
: T_h = the latency: the time to reference the cache when there is a hit
: E = various secondary effects, such as queuing effects in multiprocessor systems

There are two primary figures of merit of a cache: the latency and the hit rate. There are also a number of secondary factors affecting cache performance.〔Alan Jay Smith. "Design of CPU Cache Memories". Proc. IEEE TENCON, 1987.〕

The "hit ratio" of a cache describes how often a searched-for item is actually found in the cache. More efficient replacement policies keep track of more usage information in order to improve the hit rate (for a given cache size). The "latency" of a cache describes how long after requesting a desired item the cache can return that item (when there is a hit). Faster replacement strategies typically keep track of less usage information (or, in the case of a direct-mapped cache, no information) to reduce the amount of time required to update that information. Each replacement strategy is a compromise between hit rate and latency.

Measurements of the hit ratio are typically performed on benchmark applications; the actual hit ratio varies widely from one application to another.
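As an illustration, the average-memory-reference-time formula can be evaluated directly. The timing values below are made-up examples, not measurements of any real hardware:

```python
def avg_memory_reference_time(miss_ratio, t_miss, t_hit, secondary=0.0):
    """Average memory reference time: T = m * T_m + T_h + E."""
    return miss_ratio * t_miss + t_hit + secondary

# Hypothetical figures: 5 ns hit latency, 100 ns main-memory access,
# 3% miss ratio, no secondary effects.
t = avg_memory_reference_time(miss_ratio=0.03, t_miss=100.0, t_hit=5.0)
# 0.03 * 100 + 5 = 8.0 ns
```

Note how the miss penalty dominates: halving the miss ratio (e.g. with a better replacement policy) saves 1.5 ns per reference here, far more than shaving a fraction of a nanosecond off the hit latency.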
In particular, video and audio streaming applications often have a hit ratio close to zero, because each bit of data in the stream is read once (a compulsory miss), used, and then never read or written again. Even worse, many cache algorithms (in particular, LRU) allow this streaming data to fill the cache, pushing out of the cache information that will be used again soon (cache pollution).〔Paul V. Bolotoff. "Functional Principles of Cache Memory". 2007.〕
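The cache-pollution effect is easy to reproduce with a minimal LRU sketch (the class and workload below are illustrative, not any standard library API):

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU cache: evicts the least recently used key when full."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()

    def access(self, key):
        """Touch a key; return True on a hit, False on a miss."""
        hit = key in self.data
        if hit:
            self.data.move_to_end(key)         # mark as most recently used
        else:
            if len(self.data) >= self.capacity:
                self.data.popitem(last=False)  # evict the LRU entry
            self.data[key] = None
        return hit

cache = LRUCache(4)
for k in ("a", "b", "c", "d"):
    cache.access(k)              # warm the cache with "hot" items

# A one-pass stream of never-reused keys fills the cache ...
for k in range(100):
    cache.access(("stream", k))

# ... so every formerly hot item has been evicted (all misses).
hits = [cache.access(k) for k in ("a", "b", "c", "d")]
# hits == [False, False, False, False]
```

Because LRU ranks entries only by recency, the 100 single-use stream keys look "fresher" than the hot items and displace them, which is exactly the pollution described above.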