Cache

A cache is a hardware or software component that improves performance by storing data so that future requests for it can be served more quickly. The stored data may be copies of values kept elsewhere or the results of earlier computations. Without a cache, every request must be satisfied from the data's original, slower location, which takes significantly more time. The more requests the cache can serve directly, the better the overall computer performance.
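The idea can be sketched in a few lines. This is a minimal illustration, not a real cache implementation; the `fetch_from_backing_store` function and its simulated delay are assumptions standing in for a genuinely slow original location such as a disk or network.

```python
import time

def fetch_from_backing_store(key):
    """Hypothetical slow source of truth (stand-in for disk or network)."""
    time.sleep(0.01)  # simulated latency of the original location
    return key.upper()

cache = {}

def cached_fetch(key):
    if key in cache:                           # found in the cache: fast path
        return cache[key]
    value = fetch_from_backing_store(key)      # not cached: slow fetch
    cache[key] = value                         # keep a copy for future requests
    return value

cached_fetch("page")   # first request goes to the slow backing store
cached_fetch("page")   # repeat request is served directly from the cache
```

The second call never touches the backing store, which is exactly the saving the paragraph above describes.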

While a buffer holds data that a user or program explicitly manages, a cache holds data transparently: users are generally unaware that the cache even exists. Caches are typically small relative to the storage they front, yet highly effective.

Caches consist mainly of groups of entries, each holding a copy of some data together with a tag identifying where that data lives in the backing store. When data needs to be accessed, the cache is first checked for a matching tag. If one is found, the cached copy is used instead; this is known as a cache hit. The fraction of accesses that result in hits is known as the hit rate of the cache. When no matching tag is found, the access is a cache miss; the data is then fetched from the backing store and usually copied into the cache so that the next access can hit. When data is written, the timing of the update to the backing store is governed by the write policy. In a write-through cache, every write to the cache is written to the backing store at the same time. In a write-back cache, writes are tracked by location and marked as dirty instead of being written immediately to the backing store; eventually the dirty entries are written back, which is known as a lazy write. No-write allocation is a policy that caches only processor reads, avoiding the need for a write-back when the old value of the data was not already in the cache.
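The two write policies above can be contrasted in a short sketch. The class names, the backing-store dictionary, and the explicit `flush()` method are assumptions made for illustration; real hardware tracks dirtiness per cache line and flushes on eviction.

```python
class WriteThroughCache:
    """Every write updates the cache and the backing store simultaneously."""
    def __init__(self, backing):
        self.backing = backing
        self.entries = {}

    def write(self, key, value):
        self.entries[key] = value   # update the cached copy...
        self.backing[key] = value   # ...and the backing store at the same time

class WriteBackCache:
    """Writes are marked dirty and reach the backing store only later."""
    def __init__(self, backing):
        self.backing = backing
        self.entries = {}
        self.dirty = set()          # locations written but not yet flushed

    def write(self, key, value):
        self.entries[key] = value
        self.dirty.add(key)         # defer the slow write to the backing store

    def flush(self):
        """The eventual 'lazy write' of all dirty entries."""
        for key in self.dirty:
            self.backing[key] = self.entries[key]
        self.dirty.clear()

store = {}
wb = WriteBackCache(store)
wb.write("x", 1)     # store is still empty at this point
wb.flush()           # now the write reaches the backing store
```

The write-back variant batches slow writes at the cost of a window during which the backing store is out of date, which is what makes the dirty bookkeeping necessary.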

Components other than the cache may change the data in the backing store, making the copy in the cache stale. Conversely, when the data in the cache is updated, copies of that data held in other caches become stale. The communication protocols between cache managers that keep the data consistent are known as coherency protocols.
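A toy invalidation scheme conveys the flavor of such a protocol. This is a deliberately simplified sketch, not a real coherency protocol such as those used between CPU caches: the shared `peers` list and the invalidate-on-write rule are assumptions made for illustration.

```python
class CoherentCache:
    peers = []   # all caches sharing the same backing store

    def __init__(self, backing):
        self.backing = backing
        self.entries = {}
        CoherentCache.peers.append(self)

    def read(self, key):
        if key not in self.entries:                # miss: fetch a fresh copy
            self.entries[key] = self.backing[key]
        return self.entries[key]

    def write(self, key, value):
        self.entries[key] = value
        self.backing[key] = value
        for peer in CoherentCache.peers:           # tell the other caches
            if peer is not self:
                peer.entries.pop(key, None)        # drop their stale copies

backing = {"shared": 1}
left, right = CoherentCache(backing), CoherentCache(backing)
left.read("shared"); right.read("shared")   # both now hold copies
left.write("shared", 2)                     # right's copy is invalidated
right.read("shared")                        # re-fetched, so it sees 2
```

Invalidating the stale copies forces the other caches to re-read the updated value, which is the consistency guarantee a coherency protocol provides.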

Web caches store previous responses from web servers, such as web pages. Because those responses can be reused, web caches reduce the amount of information that must travel across the network, decreasing both bandwidth use and the processing load on the web server. Web browsers usually employ a built-in web cache, and some internet service providers operate a caching proxy server, which is a web cache shared among all users of the network.
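A browser-style web cache can be sketched as a table of responses keyed by URL, each kept only while it is fresh. The `fetch_page` helper and the 300-second lifetime are assumptions for illustration; real web caches follow the freshness rules carried in HTTP headers.

```python
import time

TTL = 300.0        # assumed freshness lifetime, in seconds
web_cache = {}     # URL -> (time stored, response body)

def fetch_page(url):
    """Stand-in for an actual HTTP request to the origin server."""
    return f"response for {url}"

def get_page(url):
    entry = web_cache.get(url)
    if entry and time.time() - entry[0] < TTL:
        return entry[1]                      # fresh cached copy: no network use
    body = fetch_page(url)                   # missing or expired: refetch
    web_cache[url] = (time.time(), body)
    return body

get_page("http://example.com/")   # first request reaches the server
get_page("http://example.com/")   # repeat request is served from the cache
```

Expiry is what distinguishes a web cache from a plain lookup table: a stale entry is treated like a miss and refetched rather than reused.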

The terms buffer and cache are often used interchangeably; however, they differ in intent. A buffer is a temporary memory location used when CPU instructions cannot directly address data stored in peripheral devices. Buffers are practical when a large block of data must be assembled, or when data is produced and consumed in different orders. In addition, a full buffer is generally transferred as one sequential block, so buffering by itself can improve performance. A cache also improves transfer performance, partly because multiple small transfers may combine into one large block, but mainly because cached data is likely to be read more than once. The purpose of a cache is to reduce accesses to the underlying, slower storage.
