Compile-Time Analysis of Explicitly Parallel Programs
Chow, Jyh-Herng
This item is only available for download by members of the University of Illinois community. Students, faculty, and staff at the U of I may log in with their NetID and password to view the item. If you are trying to access an Illinois-restricted dissertation or thesis, you can request a copy through your library's Inter-Library Loan office or purchase a copy directly from ProQuest.
Permalink
https://hdl.handle.net/2142/72090
Description
Title
Compile-Time Analysis of Explicitly Parallel Programs
Author(s)
Chow, Jyh-Herng
Issue Date
1993
Doctoral Committee Chair(s)
Harrison, Williams Ludwell, III
Department of Study
Computer Science
Discipline
Computer Science
Degree Granting Institution
University of Illinois at Urbana-Champaign
Degree Name
Ph.D.
Degree Level
Dissertation
Keyword(s)
Computer Science
Abstract
Explicit parallelism not only complicates the semantics of a programming language, but also invalidates most current compiler techniques for program analysis and optimization. The problem stems from the inadequacy of control flow graphs to describe information flow in a parallel program, and the insufficiency of data dependences and control dependences to describe the constraints on a correct execution. Because of the difficulties in analyzing explicitly parallel programs, most compilers simply sidestep the problem, either by restricting data sharing among concurrent activities or by not optimizing parallel code. Some even produce incorrect code that violates the program semantics.
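As a minimal illustration of this point (a sketch in Python, not drawn from the dissertation), the two arms of a cobegin can be modelled as threads sharing two variables; the value the second arm reads is determined by a write that no sequential analysis of that arm alone could see.

    # Illustrative sketch only: why per-thread reasoning breaks under
    # shared-variable parallelism.  Two "cobegin" arms run as threads;
    # the consumer's result depends on writes made by the producer, a
    # flow that neither arm's own control flow graph exhibits.
    import threading

    data = 0
    done = False

    def producer():
        global data, done
        data = 42          # write the shared data ...
        done = True        # ... then publish it

    def consumer(out):
        # A purely sequential analysis of this function sees no assignment
        # to `done` or `data`, so it might fold the loop away or replace
        # `data` with its initial value 0 -- both wrong in the parallel program.
        while not done:
            pass
        out.append(data)

    result = []
    t1 = threading.Thread(target=producer)
    t2 = threading.Thread(target=consumer, args=(result,))
    t1.start(); t2.start()
    t1.join(); t2.join()
    print(result)          # [42]: the value flowed between the two arms

Any analysis that inspects the consumer in isolation would treat `done` and `data` as loop-invariant; only a representation that exposes cross-thread information flow avoids that mistake.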
This work proposes a general framework for analyzing parallel programs in which concurrent activities interact through shared variables, and presents analyses of side effects, data dependences, object lifetimes, and concurrent expressions for a language with dynamic allocation, pointers, first-class functions, and cobegin parallelism.
This framework is based on abstract interpretation, a semantics-based analysis technique. We present a formal semantics of the language, based on a labeled transition system. Because every possible interaction between concurrent activities has to be considered, the state space of the analysis must be reduced. We approach this problem in two ways: eliminating redundant interleavings, which do not contribute to the final results, and abstract interpretation, which provides systematic methods for combining related states. An iterative algorithm is presented for state space reduction in the presence of pointers and closures. Then, we present the abstract domains and the analyses under these abstract domains. We prove the correctness and termination of this abstract analysis. The results of the analyses can facilitate many applications, such as program optimization, memory management, and race detection. A graph representation of parallel programs useful for general compiler transformation is presented. Finally, an implementation of this framework and its preliminary results are described.
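The following Python sketch, again only illustrative and not the algorithm of the thesis, shows the two state-space reductions side by side on a toy labeled transition system: a worklist explores the interleavings of two threads' atomic writes, a sign abstraction maps concrete values of the shared variable to {-, 0, +} so that related states are combined, and a visited set keeps interleavings that meet in the same abstract state from being re-explored.

    # Illustrative sketch: exploring the labeled transition system of two
    # threads that write a shared variable x, with value abstraction and
    # pruning of redundant interleavings.

    def alpha(v):
        # abstraction function: concrete integer -> sign
        return '0' if v == 0 else ('+' if v > 0 else '-')

    # Each thread is a straight-line sequence of atomic writes to x.
    threads = {
        'T1': [1, 5],      # T1 writes 1, then 5
        'T2': [-3],        # T2 writes -3
    }
    names = list(threads)

    def explore():
        # An abstract state is (program counter of each thread, sign of x).
        start = (tuple(0 for _ in names), alpha(0))
        seen, worklist = {start}, [start]
        while worklist:
            pcs, ax = worklist.pop()
            for i, name in enumerate(names):
                pc = pcs[i]
                if pc == len(threads[name]):
                    continue                      # this thread has finished
                nxt_pcs = pcs[:i] + (pc + 1,) + pcs[i + 1:]
                nxt = (nxt_pcs, alpha(threads[name][pc]))
                if nxt not in seen:               # interleavings meeting in the
                    seen.add(nxt)                 # same abstract state collapse
                    worklist.append(nxt)
        return seen

    print(sorted(explore()))   # the reachable abstract states, each visited once

The visited set plays the role of the interleaving reduction, while the abstraction function determines how coarsely related states are merged; tightening or loosening either one trades precision against the size of the explored state space.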