Neural abstract interpretation: Leveraging neural networks for automated, efficient and differentiable abstract interpretation
Gomber, Shaurya
Permalink
https://hdl.handle.net/2142/124424
Description
- Title
- Neural abstract interpretation: Leveraging neural networks for automated, efficient and differentiable abstract interpretation
- Author(s)
- Gomber, Shaurya
- Issue Date
- 2024-04-30
- Director of Research (if dissertation) or Advisor (if thesis)
- Singh, Gagandeep
- Department of Study
- Computer Science
- Discipline
- Computer Science
- Degree Granting Institution
- University of Illinois at Urbana-Champaign
- Degree Name
- M.S.
- Degree Level
- Thesis
- Keyword(s)
- Abstract Interpretation
- Program Analysis
- Differentiable Program Analysis
- Abstract
- Abstract Interpretation is a popular technique for formally analyzing the properties of programs, neural networks, and complex real-world systems. However, designing efficient abstract transformers for expressive relational domains such as Octagon and Polyhedra is hard, as one needs to carefully balance the fundamental tradeoff between the cost, soundness, and precision of the transformer for downstream tasks. Further, scalable implementations involve intricate performance optimizations such as Octagon and Polyhedra decomposition. This motivates the need for automatic generation of efficient, sound, and precise abstract transformers. Given the inherent complexity of abstract transformers and the proven capability of neural networks to approximate complex functions effectively, this thesis envisions and proposes the concept of Neural Abstract Transformers: neural networks that serve as abstract transformers. The Neural Abstract Interpretation (NeurAbs) framework introduced in this thesis provides supervised and unsupervised methods to learn efficient neural transformers automatically, which reduces development costs. These neural transformers can then act as a fast, and sometimes even more precise, replacement for slow and imprecise hand-crafted transformers. Additionally, unlike hand-crafted transformers, neural transformers are differentiable. This enables differentiable abstract interpretation and allows gradient-guided learning methods to solve problems that can be posed as learning tasks. We instantiate the NeurAbs framework for two widely used numerical domains: Interval and Octagon. Evaluations on these domains demonstrate the effectiveness of NeurAbs in learning sound and precise neural transformers. We further demonstrate the advantages of the differentiability of neural transformers by formulating the task of invariant generation as a learning problem and then using the learned neural transformers in the Octagon domain to generate valid octagonal invariants.
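To make the notion of an abstract transformer concrete (this sketch is illustrative only and not taken from the thesis): in the Interval domain, an abstract state maps each variable to a bound pair `[lo, hi]`, and a transformer for an operation must be sound, i.e. its output interval must contain every possible concrete result. A minimal hand-crafted version in Python, with hypothetical function names:

```python
# Illustrative sketch (not from the thesis): hand-crafted abstract
# transformers in the Interval domain. An interval (lo, hi) over-approximates
# the set of concrete values a variable can take; a sound transformer's
# output must contain every concrete result of the operation.

def interval_add(a, b):
    """Sound and exact transformer for x + y on intervals."""
    (alo, ahi), (blo, bhi) = a, b
    return (alo + blo, ahi + bhi)

def interval_mul(a, b):
    """Sound transformer for x * y: take the extremes of the corner products."""
    (alo, ahi), (blo, bhi) = a, b
    corners = [alo * blo, alo * bhi, ahi * blo, ahi * bhi]
    return (min(corners), max(corners))

def interval_join(a, b):
    """Least upper bound: the smallest interval containing both inputs,
    used to merge abstract states at control-flow join points."""
    return (min(a[0], b[0]), max(a[1], b[1]))

# Example: x in [-2, 3], y in [1, 4]
x, y = (-2, 3), (1, 4)
print(interval_add(x, y))   # (-1, 7)
print(interval_mul(x, y))   # (-8, 12)
```

A neural transformer, as envisioned in the thesis, would replace such hand-written functions with a learned network mapping input bounds to output bounds; because the network is differentiable, soundness and precision objectives can be optimized by gradient descent rather than enforced by manual case analysis.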
- Graduation Semester
- 2024-05
- Type of Resource
- Thesis
- Copyright and License Information
- Copyright 2024 Shaurya Gomber
Owning Collections
Graduate Dissertations and Theses at Illinois (primary)