Self-scheduling, data synchronization and program transformation for multiprocessor systems
Tang, Peiyi
Permalink
https://hdl.handle.net/2142/21915
Description
Title
Self-scheduling, data synchronization and program transformation for multiprocessor systems
Author(s)
Tang, Peiyi
Issue Date
1989
Doctoral Committee Chair(s)
Yew, Pen-Chung
Department of Study
Computer Science
Discipline
Computer Science
Degree Granting Institution
University of Illinois at Urbana-Champaign
Degree Name
Ph.D.
Degree Level
Dissertation
Keyword(s)
Computer Science
Language
eng
Abstract
The limitations of vector supercomputing and of device speed have led to the development of multiprocessor supercomputers. Although large tightly-coupled shared-memory multiprocessor systems have become feasible, such systems cannot be considered successful unless their numerous processors can coordinate with each other efficiently for a wide range of applications. Advanced software techniques, together with architectural support for them, are key to the success of modern multiprocessor supercomputers.
This dissertation first concentrates on two important software issues for large multiprocessor systems: processor scheduling and data synchronization.
Self-scheduling, an efficient dynamic heuristic scheduling technique, is a practical solution to the scheduling problem of multiprocessor systems. We propose several self-scheduling schemes which can be used by a compiler to generate self-scheduling object code for parallel programs. Since busy-waiting is used to enforce cross-iteration data dependences, deadlocks in self-scheduling are possible. We identify the conditions that allow deadlock-free self-scheduling for different self-scheduling models and propose the use of an appropriate scheduling order for allocating processors to prevent deadlocks. The self-scheduling order also has a significant impact on the performance of parallel loops with cross-iteration data dependences. We propose the shortest-delay self-scheduling (SDSS) order based on Doacross delays determined by cross-iteration data dependences. We show by simulation that SDSS can offer near-optimal performance in most cases. A compile-time program transformation for SDSS is also presented.
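
As an illustration only (not the dissertation's generated code), the following C sketch shows the basic self-scheduling idea: each processor repeatedly claims the next iteration of a parallel loop with an atomic fetch-and-add on a shared counter, so work is assigned dynamically at run time. The loop body and all names here are hypothetical.

    /* Minimal self-scheduling sketch (assumed illustration). */
    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdio.h>

    #define N      1000                  /* loop iterations              */
    #define NPROC  4                     /* worker threads (processors)  */

    static atomic_int next_iter;         /* shared iteration counter     */
    static double a[N], b[N];

    static void body(int i) { a[i] = b[i] * 2.0 + i; }  /* hypothetical loop body */

    static void *worker(void *arg)
    {
        (void)arg;
        for (;;) {
            int i = atomic_fetch_add(&next_iter, 1);    /* claim one iteration */
            if (i >= N)
                break;                                   /* loop exhausted */
            body(i);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t t[NPROC];
        for (long p = 0; p < NPROC; p++)
            pthread_create(&t[p], NULL, worker, NULL);
        for (int p = 0; p < NPROC; p++)
            pthread_join(t[p], NULL);
        printf("a[N-1] = %f\n", a[N - 1]);
        return 0;
    }

In an SDSS-style scheme, the order in which iterations are handed out would additionally be chosen from the Doacross delays, rather than the simple increasing order used above.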
Data synchronization is necessary to enforce cross-iteration data dependences. We propose a set of data-level synchronization instructions to support data synchronization. Compiler algorithms for generating these data-level synchronization instructions for different types of subscript functions are presented.
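
The sketch below, again only an assumed illustration and not the proposed instruction set, shows the effect of data-level synchronization in C: each array element carries a "full" flag, and an iteration that needs a[i - DIST] busy-waits until the producing iteration has set that flag, so the cross-iteration dependence is enforced no matter how iterations are interleaved across processors.

    /* Data-level synchronization sketch (assumed illustration). */
    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdio.h>

    #define N     64
    #define DIST  3                       /* cross-iteration dependence distance */
    #define NPROC 4

    static double     a[N];
    static atomic_int full[N];            /* one synchronization flag per element */

    static void iteration(int i)
    {
        if (i >= DIST)
            while (!atomic_load(&full[i - DIST]))
                ;                         /* busy-wait until a[i-DIST] is produced */
        a[i] = (i >= DIST ? a[i - DIST] : 0.0) + 1.0;
        atomic_store(&full[i], 1);        /* mark a[i] as produced */
    }

    static void *worker(void *arg)        /* cyclic assignment of iterations */
    {
        for (int i = (int)(long)arg; i < N; i += NPROC)
            iteration(i);
        return NULL;
    }

    int main(void)
    {
        pthread_t t[NPROC];
        for (long p = 0; p < NPROC; p++)
            pthread_create(&t[p], NULL, worker, (void *)p);
        for (int p = 0; p < NPROC; p++)
            pthread_join(t[p], NULL);
        printf("a[N-1] = %f\n", a[N - 1]);  /* 22.0 for N=64, DIST=3 */
        return 0;
    }

A compiler would emit the equivalent of the flag wait and flag set automatically, with the flag index derived from the subscript function of the dependent array reference.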
"The last part of the dissertation addresses an architecture issue of large multiprocessor systems: ""hot-spot"" contention. We suggest the use of software combining to distribute ""hot-spot"" addressings. A number of software combining algorithms for different access patterns, such as barrier synchronization, fetch-and-add type of operations and semaphore P/V operations, are presented."