Where is cache memory logically positioned?




















[Figure: A five-level memory hierarchy.] [Figure: A portion of a disk track.] [Figure: A disk with four platters.]

[Figure: A disk with five zones.]

A cache memory is used by the central processing unit of a computer to reduce the average cost (time or energy) of accessing data in main memory. Cache memory is logically positioned between the CPU and main memory.

By: Jeff Tyson.

Normal memory would be searched using a standard search algorithm, as learned in beginning programming classes.

If the memory is unordered, it would take on average about 128 searches to find an item among 256 entries. If the memory is ordered, binary search would find it in 8 searches. Associative memory would find the item in one search.

If one of the memory cells holds the value, it raises a Boolean flag and the item is found. We do not consider duplicate entries in the associative memory; duplicates can be handled by some rather straightforward circuitry, but this is not done in associative caches.
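The search-count comparison above can be checked with a short sketch. This assumes a 256-entry memory, which is what the 8-comparison binary-search figure implies; the function names are illustrative, not from the original.

```python
import math

def linear_search_comparisons(n, position):
    """Comparisons a linear scan makes to find the item at 'position' (0-based)."""
    return position + 1

def binary_search_worst_case(n):
    """Worst-case comparisons for binary search over n sorted entries."""
    return math.ceil(math.log2(n))

N = 256
# Average over all positions for an unordered memory: (1 + 2 + ... + 256) / 256
average_linear = sum(linear_search_comparisons(N, p) for p in range(N)) / N
print(average_linear)               # 128.5, i.e. about 128 comparisons
print(binary_search_worst_case(N))  # 8
# An associative (content-addressable) memory compares all entries in
# parallel, so the item is found in a single search step.
```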

Associative Cache. We now focus on cache memory, returning to virtual memory only at the end. Assume a number of cache lines, each holding 16 bytes, and a 24-bit address. The simplest arrangement is an associative cache; it is also the hardest to implement. Divide the 24-bit address into two parts: a 20-bit tag and a 4-bit offset.

A cache line in this arrangement would have the following format: a 20-bit tag, a V (valid) bit, a D (dirty) bit, and the 16 bytes of data. The placement of a 16-byte block of memory into the cache would be determined by a cache line replacement policy. The policy would probably prefer, first, a line with V = 0 (it holds no valid data) and, next, a line with D = 0. Such a cache line can be overwritten without first copying its contents back to main memory.
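The line format and replacement preference just described can be sketched as follows. This is a minimal, illustrative model (the class and function names are my own, not from the original); the parallel search of a real associative memory is simulated here as a loop.

```python
class CacheLine:
    """One line of a fully associative cache: tag, V bit, D bit, 16 data bytes."""
    def __init__(self):
        self.valid = False          # V bit
        self.dirty = False          # D bit
        self.tag = None             # 20-bit tag
        self.data = bytearray(16)   # 16-byte block

def split_address(addr):
    """Split a 24-bit address into a 20-bit tag and a 4-bit offset."""
    return addr >> 4, addr & 0xF

def lookup(lines, addr):
    """Search every line (in hardware, all comparisons happen in parallel)."""
    tag, offset = split_address(addr)
    for line in lines:
        if line.valid and line.tag == tag:
            return line.data[offset]   # hit
    return None                        # miss

def victim(lines):
    """Replacement preference: an invalid line first, then a clean (D = 0) line."""
    for line in lines:
        if not line.valid:
            return line
    for line in lines:
        if not line.dirty:
            return line        # can be overwritten without a write-back
    return lines[0]            # all dirty: pick one and write it back first
```

Note that only a dirty victim forces a write-back; that is exactly why the policy prefers clean lines.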

Direct-Mapped Cache. This is the simplest to implement, as the cache line index is determined directly by the address. Assume 256 cache lines, each holding 16 bytes. Divide the 24-bit address into three fields: a 12-bit explicit tag, an 8-bit line number, and a 4-bit offset within the cache line. Note that the 20-bit memory tag is divided between the 12-bit cache tag and the 8-bit line number. Consider the address 0xAB…. The cache line would also have a V bit and a D bit (the valid and dirty bits).
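The three-field split described above can be sketched as a few bit operations. The function name and the example address are illustrative choices, not from the original.

```python
def split_direct_mapped(addr):
    """Split a 24-bit address: 12-bit tag, 8-bit line number, 4-bit offset."""
    offset = addr & 0xF            # bits 0-3
    line = (addr >> 4) & 0xFF      # bits 4-11: selects the cache line directly
    tag = (addr >> 12) & 0xFFF     # bits 12-23: stored in the line's tag field
    return tag, line, offset

# Example with an arbitrary 24-bit address:
tag, line, offset = split_direct_mapped(0x123456)
print(hex(tag), hex(line), hex(offset))   # 0x123 0x45 0x6
```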

This simple implementation often works, but it is a bit rigid. A design that blends the associative cache and the direct-mapped cache might be useful. Set-Associative Caches. An N-way set-associative cache uses direct mapping, but allows a set of N memory blocks to be stored in each cache line. This allows some of the flexibility of a fully associative cache without the complexity of a large associative memory for searching the cache. Suppose a 2-way set-associative implementation of the same cache memory.

Again assume 256 cache lines, each holding 16 bytes. Consider two addresses, such as 0xCD… and 0xAB…, that share the same 8-bit line number. Each would be stored in the same cache line: set 0 of that line would hold one block (entry 0) and set 1 the other (entry 1). Virtual Memory Again. Suppose we want to support 32-bit logical addresses in a system in which physical memory is 24-bit addressable.
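A 2-way lookup of this kind can be sketched as below. This is a minimal model under the same assumed layout (12-bit tag, 8-bit line number, 4-bit offset); the names and the demonstration addresses are illustrative.

```python
NUM_LINES = 256
WAYS = 2

# cache[line_number] holds WAYS entries; each entry is one stored block.
cache = [[{"valid": False, "tag": None, "data": bytearray(16)}
          for _ in range(WAYS)] for _ in range(NUM_LINES)]

def fields(addr):
    return (addr >> 12) & 0xFFF, (addr >> 4) & 0xFF, addr & 0xF

def install(addr, value):
    """Place a block for 'addr' into a free way of its line (illustrative)."""
    tag, line, offset = fields(addr)
    for entry in cache[line]:
        if not entry["valid"]:
            entry["valid"] = True
            entry["tag"] = tag
            entry["data"][offset] = value
            return

def lookup(addr):
    """Check both ways of the selected line; only the tags need comparing."""
    tag, line, offset = fields(addr)
    for entry in cache[line]:
        if entry["valid"] and entry["tag"] == tag:
            return entry["data"][offset]   # hit
    return None                            # miss
```

Two addresses with the same line number but different tags can now coexist, one per way, instead of evicting each other as they would in a direct-mapped cache.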

We shall see this again, when we study virtual memory in a later lecture. For now, we just note that the address structure of the disk determines the structure of virtual memory.

Each disk stores data in blocks of 512 bytes, called sectors. In some older disks, it is not possible to address each sector directly. This is due to the limitations of older file organization schemes, such as FAT-16, which used a 16-bit addressing scheme for disk access. Thus 2^16 sectors could be addressed. To allow for larger disks, it was decided that a cluster of 2^K sectors would be the smallest addressable unit.
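The cluster arithmetic works out as below. This sketch assumes 512-byte sectors (the common size for disks of that era) and the 16-bit addressing limit mentioned above; the function names are my own.

```python
SECTOR_BYTES = 512      # assumed standard sector size
MAX_UNITS = 2 ** 16     # a 16-bit scheme can address 65,536 units

def cluster_bytes(k):
    """Size in bytes of a cluster of 2**k sectors."""
    return SECTOR_BYTES * (2 ** k)

def max_disk_bytes(k):
    """Largest disk addressable when clusters of 2**k sectors are the unit."""
    return MAX_UNITS * cluster_bytes(k)

print(cluster_bytes(1))   # 1024
print(cluster_bytes(2))   # 2048
```

Doubling the cluster size doubles the maximum addressable disk, at the cost of wasting more space in partially filled clusters.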

Thus one would get clusters of 1,024 bytes, 2,048 bytes, etc. Virtual memory transfers data in units of clusters, the size of which is system dependent. Examples of Cache Memory. We need to review cache memory and work some specific examples. The idea is simple, but fairly abstract; we must make it clear and obvious. While most of this discussion also applies to pages in a virtual memory system, we shall focus on cache memory.

To review, we consider the main memory of a computer. In general, an N-bit address is broken into two parts: a block tag and an offset. The most significant N - K bits of the address are the block tag; the least significant K bits represent the offset within the block. We use a specific example for clarity, and remember that our cache examples use byte addressing for simplicity. In our example, the address layout for main memory follows this tag/offset split.
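The general tag/offset split can be written as two bit operations. This is a hedged sketch; the function name and the sample address are illustrative, not from the original.

```python
def split(addr, k):
    """Split an N-bit address into an (N - K)-bit block tag and a K-bit offset."""
    tag = addr >> k                   # most significant N - K bits
    offset = addr & ((1 << k) - 1)    # least significant K bits
    return tag, offset

# With 16-byte blocks (K = 4), an arbitrary address and its fields:
tag, offset = split(0x3A7C2, 4)
print(hex(tag), hex(offset))   # 0x3a7c 0x2
```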

So the tag field for this block contains the value 0xAB…. The tag field of the cache line must also contain this value, either explicitly or implicitly. More on this later.


