Computer Science Question

  • 1) You have been asked to investigate the relative performance of a banked versus pipelined L1 data cache for a new microprocessor. Assume a 64 KB two-way set associative cache with 64-byte blocks. The pipelined cache would consist of three pipe stages, similar in capacity to the Alpha 21264 data cache. A banked implementation would consist of two 32 KB two-way set associative banks. Use CACTI and assume a 65 nm (0.065 µm) technology to answer the following questions. The cycle time output in the web version shows at what frequency a cache can operate without any bubbles in the pipeline.

a. What is the cycle time of the cache in comparison to its access time, and how many pipe stages will the cache take up (to two decimal places)?

b. Compare the area and total dynamic read energy per access of the pipelined design versus the banked design. State which takes up less area and which requires more power, and explain why that might be.

  • 2) A cache acts as a filter. For example, for every 1000 instructions of a program, an average of 20 memory accesses may exhibit low enough locality that they cannot be serviced by a 2 MB cache. The 2 MB cache is said to have an MPKI (misses per thousand instructions) of 20, and this will be largely true regardless of the smaller caches that precede the 2 MB cache. Assume the following cache/latency/MPKI values: 32 KB/1/100, 128 KB/2/80, 512 KB/4/50, 2 MB/8/40, 8 MB/16/10. Assume that accessing the off-chip memory system requires 200 cycles on average. For the following cache configurations, calculate the average time spent accessing the cache hierarchy. What do you observe about the downsides of a cache hierarchy that is too shallow or too deep?

a. 32 KB L1; 8 MB L2; off-chip memory

b. 32 KB L1; 512 KB L2; 8 MB L3; off-chip memory

c. 32 KB L1; 128 KB L2; 2 MB L3; 8 MB L4; off-chip memory
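The hierarchy calculation in question 2 can be sketched as below. This is one common model, and it rests on assumptions not stated in the question: every instruction makes one L1 access, and each level's MPKI determines how often the next level (or memory) is consulted.

```python
def avg_access_time(levels, mem_latency):
    """Average cycles per instruction spent in the cache hierarchy.

    levels: list of (latency_cycles, mpki) pairs, L1 first.
    Assumes one L1 access per instruction; each level's MPKI
    governs traffic to the next level, and the last level's
    MPKI governs traffic to off-chip memory.
    """
    time = levels[0][0]  # every instruction pays the L1 latency
    for i in range(1, len(levels)):
        time += levels[i - 1][1] / 1000 * levels[i][0]
    time += levels[-1][1] / 1000 * mem_latency
    return time

configs = {
    "a. L1 + L2":           [(1, 100), (16, 10)],
    "b. L1 + L2 + L3":      [(1, 100), (4, 50), (16, 10)],
    "c. L1 + L2 + L3 + L4": [(1, 100), (2, 80), (8, 40), (16, 10)],
}
for name, levels in configs.items():
    print(name, avg_access_time(levels, 200))  # ~4.6, ~4.2, ~4.48 cycles
```

Under this model, the shallow hierarchy (a) pays heavily for L1 misses going straight to a slow L2, while the deep hierarchy (c) accumulates latency from its many intermediate levels.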
  • 3) You are designing a PMD and optimizing it for low energy. The core, including an 8 KB L1 data cache, consumes 1 W whenever it is not in hibernation. If the core has a perfect L1 cache hit rate, it achieves an average CPI of 1 for a given task, that is, 1000 cycles to execute 1000 instructions. Each additional cycle accessing the L2 and beyond adds a stall cycle for the core. Based on the following specifications, what is the size of L2 cache that achieves the lowest energy for the PMD (core, L1, L2, memory) for that given task?
  • The core frequency is 1 GHz, and the L1 has an MPKI of 100.
  • A 256 KB L2 has a latency of 10 cycles, an MPKI of 20, a background power of 0.2 W, and each L2 access consumes 0.5 nJ.
  • A 1 MB L2 has a latency of 20 cycles, an MPKI of 10, a background power of 0.8 W, and each L2 access consumes 0.7 nJ.
  • The memory system has an average latency of 100 cycles, a background power of 0.5 W, and each memory access consumes 35 nJ.
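The energy comparison in question 3 can be sketched as below, for a task of 1000 instructions. The model assumes (beyond what the question states explicitly) that background power for the core, memory, and the chosen L2 is drawn for the entire runtime, and that each L1 miss stalls the core for the L2 latency while each L2 miss stalls it for a further 100 cycles.

```python
def task_energy(l2_latency, l2_mpki, l2_bg_w, l2_access_nj):
    """Energy in joules to run 1000 instructions.

    Assumes a 1 GHz core with base CPI 1 and an L1 MPKI of 100;
    core 1 W and memory 0.5 W background power, 35 nJ per memory
    access, and a 100-cycle memory latency, per the question.
    """
    l1_misses = 100        # L2 accesses per 1000 instructions
    l2_misses = l2_mpki    # memory accesses per 1000 instructions
    cycles = 1000 + l1_misses * l2_latency + l2_misses * 100
    runtime = cycles / 1e9                     # seconds at 1 GHz
    background = (1.0 + 0.5 + l2_bg_w) * runtime
    dynamic = (l1_misses * l2_access_nj + l2_misses * 35) * 1e-9
    return background + dynamic

e_256k = task_energy(10, 20, 0.2, 0.5)  # ~7.55 uJ
e_1m   = task_energy(20, 10, 0.8, 0.7)  # ~9.62 uJ
```

Under these assumptions both configurations take the same 4000 cycles, so the 256 KB L2 wins on its lower background power despite causing more memory accesses.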
  • 4) The ways of a set can be viewed as a priority list, ordered from high priority to low priority. Every time the set is touched, the list can be reorganized to change block priorities. With this view, cache management policies can be decomposed into three sub-policies: Insertion, Promotion, and Victim Selection. Insertion defines where newly fetched blocks are placed in the priority list. Promotion defines how a block’s position in the list is changed every time it is touched (a cache hit). Victim Selection defines which entry of the list is evicted to make room for a new block when there is a cache miss.

a. Can you frame the LRU cache policy in terms of the Insertion, Promotion, and Victim Selection sub-policies?


b. Can you define other Insertion and Promotion policies that may be competitive and worth exploring further?
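For part a, the standard LRU framing (insert at the MRU position, promote to MRU on a hit, evict from the LRU position) can be sketched as a priority list; the class below is an illustrative model, not a hardware design.

```python
class LRUSet:
    """One cache set modeled as a priority list, index 0 = MRU.

    LRU decomposed into the three sub-policies:
    Insertion  -> new blocks enter at the MRU (head) position.
    Promotion  -> on a hit, the block moves to the MRU position.
    Victim Selection -> on a miss in a full set, the LRU (tail)
                        block is evicted.
    """
    def __init__(self, ways):
        self.ways = ways
        self.blocks = []  # ordered MRU -> LRU

    def access(self, tag):
        if tag in self.blocks:             # hit
            self.blocks.remove(tag)
            self.blocks.insert(0, tag)     # Promotion: move to MRU
            return True
        if len(self.blocks) == self.ways:  # miss in a full set
            self.blocks.pop()              # Victim Selection: evict LRU
        self.blocks.insert(0, tag)         # Insertion: place at MRU
        return False
```

For part b, alternatives fit the same skeleton: for example, inserting new blocks near the LRU end (so blocks must earn promotion before they displace the working set) or promoting a block by only one position per hit are both policies that have been explored as scan-resistant variants of LRU.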

  • 5) You are trying to appreciate how important the principle of locality is in justifying the use of a cache memory, so you experiment with a computer having an L1 data cache and a main memory (you exclusively focus on data accesses). The latencies (in CPU cycles) of the different kinds of accesses are as follows: cache hit, 1 cycle; cache miss, 110 cycles; main memory access with cache disabled, 105 cycles.

a. When you run a program with an overall miss rate of 3%, what will the average memory access time (in CPU cycles) be?

b. Next, you run a program specifically designed to produce completely random data addresses with no locality. Toward that end, you use an array of size 1 GB (all of which fits in the main memory). Accesses to random elements of this array are continuously made (using a uniform random number generator to generate the element indices). If your data cache size is 64 KB, what will the average memory access time be?

c. If you compare the result obtained in part (b) with the main memory access time when the cache is disabled, what can you conclude about the role of the principle of locality in justifying the use of cache memory?

d. You observed that a cache hit produces a gain of 104 cycles (1 cycle vs. 105), but a miss produces a loss of 5 cycles (110 cycles vs. 105). In the general case, we can express these two quantities as G (gain) and L (loss). Using these two quantities (G and L), identify the highest miss rate beyond which the cache would be disadvantageous.

Instructions:
  • 1. The answers should be detailed enough and as per what is asked in the question.
  • 2. Appropriate reasoning and steps should be mentioned for questions involving computations.
  • 3. Answers to the questions should be original; since they can easily be found online, care must be taken to avoid plagiarism.
  • 4. An AI detection feature will be turned on at submission, so care must be taken.
  • 5. Document should be in APA format.
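The arithmetic in question 5 can be sketched as below. The break-even condition in the last part is an assumption-free consequence of the definitions: the cache pays off while the expected gain per access, (1 − m)·G, exceeds the expected loss, m·L, giving m < G/(G + L).

```python
def amat(miss_rate, hit_cycles=1, miss_cycles=110):
    # AMAT = hit_rate * hit_time + miss_rate * miss_time
    return (1 - miss_rate) * hit_cycles + miss_rate * miss_cycles

# (a) 3% miss rate: 0.97 * 1 + 0.03 * 110 = 4.27 cycles
a = amat(0.03)

# (b) uniform random accesses over 1 GB with a 64 KB cache:
# P(hit) ~ 64 KB / 1 GB = 2**-14, essentially zero, so AMAT ~ 110
hit_rate = (64 * 1024) / (1 << 30)
b = amat(1 - hit_rate)

# (d) break-even miss rate: cache helps while (1 - m) * G > m * L,
# i.e. m < G / (G + L); with G = 104 and L = 5 that is 104/109.
G, L = 104, 5
break_even = G / (G + L)
```

Part (b) coming out slightly above the 105-cycle uncached latency is the point of part (c): without locality, the cache is pure overhead.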
