Shared Cache Organization for Multiple-Stream Computer Systems
Yeh, Chi-Chung
This item is only available for download by members of the University of Illinois community.
Permalink
https://hdl.handle.net/2142/66255
Description
Title
Shared Cache Organization for Multiple-Stream Computer Systems
Author(s)
Yeh, Chi-Chung
Issue Date
1981
Department of Study
Electrical Engineering
Discipline
Electrical Engineering
Degree Granting Institution
University of Illinois at Urbana-Champaign
Degree Name
Ph.D.
Degree Level
Dissertation
Keyword(s)
Engineering, Electronics and Electrical
Language
eng
Abstract
Organizations of shared two-level memory hierarchies for parallel-pipelined multiple-instruction-stream processors are studied. The multiple-copy data consistency problem is eliminated entirely by sharing the caches. All memory modules are assumed to be identical, and cache addresses are interleaved by sets. For a parallel-pipelined processor of order (s,p), which consists of p parallel processors, each a pipelined processor with degree of multiprogramming s, there can be up to sp cache requests from distinct instruction streams in each instruction cycle. The cache memory interference and the shared-cache hit ratio in such systems are investigated.
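Set interleaving of this kind can be illustrated with a small sketch (an assumption for illustration only: the block size, modulo mapping, and parameter values below are invented and are not the thesis's exact addressing scheme). Consecutive sets are spread across the m modules of each line, so up to sp concurrent requests from distinct streams tend to land in different modules:

```python
# Illustrative sketch, NOT the thesis's mapping: how set-interleaved
# addressing might distribute requests across a shared cache built from
# l lines of m modules each. Parameter names (l, m, s, p) follow the
# abstract; block_size and the modulo scheme are assumptions.

def cache_module(addr, l, m, block_size=16):
    """Map a memory address to (line, module) in an l-by-m shared cache,
    interleaving consecutive sets across modules so that nearby
    references from distinct instruction streams rarely collide."""
    set_index = (addr // block_size) % (l * m)
    module = set_index % m          # consecutive sets -> different modules
    line = (set_index // m) % l
    return line, module

# A processor of order (s, p) can issue up to s*p requests per cycle;
# two requests conflict only if they map to the same module.
s, p = 2, 4
requests = [0x1000 + 16 * i for i in range(s * p)]
targets = [cache_module(a, l=8, m=4) for a in requests]
```

With this toy mapping, eight consecutive block addresses cycle through all four modules before reusing any, which is the intent of interleaving by sets.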
The study shows that the set-associative mapping mechanism, the write-through-with-buffering update scheme, and the no-write-allocate block fetch strategy are suitable for shared-cache systems. For private-cache systems, by contrast, the write-back-with-buffering update scheme and the write-allocate block fetch strategy are considered in this thesis.
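The difference between the two policy pairs can be sketched as follows (the policy names come from the abstract; the dict-based cache, write buffer, and `fetch_block` stub are assumptions made for this example, not the thesis's implementation):

```python
# Illustrative contrast between the two write-policy pairs the thesis
# associates with shared vs. private caches. Data structures here are
# invented for the example.

write_buffer = []                  # pending buffered stores to main memory

def fetch_block(addr):
    """Stub for reading a block from main memory (costs ~ block time T)."""
    return 0

def handle_write(cache, addr, value, policy):
    """cache maps addr -> (value, dirty).
    'shared'  models write-through + no-write-allocate;
    'private' models write-back + write-allocate."""
    if policy == "shared":
        if addr in cache:
            cache[addr] = (value, False)    # cached copy stays clean
        write_buffer.append((addr, value))  # every write goes to memory
        # on a write miss, the block is NOT fetched (no-write-allocate)
    else:
        if addr not in cache:
            cache[addr] = (fetch_block(addr), False)  # write-allocate
        cache[addr] = (value, True)         # dirty; written back on eviction
```

Write-through keeps memory current at the cost of memory traffic (absorbed here by the buffer), while write-back defers traffic but leaves dirty blocks that must eventually be written back.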
Performance analysis is carried out using discrete Markov chains and probability-based theorems. Performance is evaluated as a function of the hit ratio h, the processor order (s,p), and the cache organization, characterized by the number of lines l, the number of modules per line m, the cache cycle time c, and the block transfer time T. Results show that for reasonably large l, high performance can be obtained for a shared cache with small (1-h)T. Shared-cache systems may perform better than private-cache systems if the shared cache yields a higher hit ratio than private caches. The shared-cache memory organization is also suitable for single-pipelined-processor systems because of its low access interference; with a reasonable choice of system parameters, that interference can be reduced to extremely low levels.
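The role of the (1-h)T term can be seen in a simple effective-access-time estimate (a back-of-the-envelope model assumed here for illustration, not the thesis's Markov-chain analysis, which also accounts for module interference):

```python
def effective_access_time(h, c, T):
    """Mean cost per reference: every access pays one cache cycle c,
    and a miss (probability 1 - h) additionally pays the block
    transfer time T -- so performance hinges on keeping (1-h)*T small."""
    return c + (1 - h) * T

# Example: h = 0.95, c = 1 cycle, T = 8 cycles
# -> 1 + 0.05 * 8 = 1.4 cycles per reference
```

A shared cache that raises h (by pooling capacity and removing duplicate copies) directly shrinks the (1-h)T penalty, which is the intuition behind the comparison with private caches above.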
Some design tradeoffs are discussed, and examples illustrate the wide variety of design options that can be obtained. Performance differences due to alternative architectures are also shown by comparing shared and private caches over a wide range of parameters.