Towards efficient algorithms and systems for tensor decompositions and tensor networks
Ma, Linjian
Permalink
https://hdl.handle.net/2142/121946
Description
- Title
- Towards efficient algorithms and systems for tensor decompositions and tensor networks
- Author(s)
- Ma, Linjian
- Issue Date
- 2023-09-28
- Director of Research (if dissertation) or Advisor (if thesis)
- Solomonik, Edgar
- Doctoral Committee Chair(s)
- Solomonik, Edgar
- Committee Member(s)
- Chekuri, Chandra
- Olson, Luke
- Stoudenmire, Edwin M
- Department of Study
- Computer Science
- Discipline
- Computer Science
- Degree Granting Institution
- University of Illinois at Urbana-Champaign
- Degree Name
- Ph.D.
- Degree Level
- Dissertation
- Keyword(s)
- tensor
- tensor decomposition
- tensor network
- tensor contraction
- randomized algorithm
- parallel algorithm
- quantum computing
- Abstract
- Tensors, which are multidimensional arrays generalizing vectors and matrices, play an important role across various domains, including signal processing, machine learning, and computational physics. Despite their widespread applications, computing with high-dimensional tensors poses a challenge known as the “curse of dimensionality”. Tensor decomposition provides a pathway to overcoming this challenge by representing or approximating high-dimensional tensors as tensor networks. These networks consist of interconnected small tensors whose contraction relationships form a specific graph structure. This thesis introduces computationally efficient numerical algorithms and computer systems for both tensor decompositions and problems involving tensor networks. On the algorithmic side, we present novel inexact solvers that not only outperform standard methods in computational cost but also offer theoretical guarantees on approximation accuracy. On the system side, we develop libraries that efficiently automate algorithmic development for tensor networks, enhancing their practical usability. In the first part of the thesis, we explore how alternating minimization, the most common iterative algorithm for tensor decompositions, can be made more efficient and automated. We propose an inexact solver named pairwise perturbation, which exploits the fact that the tensor decomposition outputs change little as the algorithm approaches convergence: it approximates the exact normal equations by previously computed normal equations with perturbative corrections, reducing the asymptotic cost. A further challenge to efficient alternating minimization is exploiting the shared sub-structure of the tensor contractions computed at each iteration.
We introduce an automatic differentiation system named AutoHOOT, which incorporates tensor algebra-specific transformations and includes algorithms to automatically amortize the shared sub-structure of tensor contractions across the subproblems in each iteration of alternating minimization. Each such subproblem is a linear least squares problem whose left-hand-side matrix is tall and skinny with a specific tensor network structure. Such problems motivate the second part of the thesis, which presents novel sketching algorithms for both tensor decompositions and tensor networks. Sketching employs random matrices, known as embeddings, to project data onto low-dimensional spaces, reducing the computational cost of subsequent operations. For data with a tensor network structure, we present efficient algorithms that sketch the data using tensor network-structured embeddings, and we provide theoretical bounds on the accuracy of the resulting sketches. The proposed sketching techniques are used to accelerate various problems involving tensor networks, including tensor decompositions. The third part presents algorithms to approximate the output of tensor network contraction, which explicitly evaluates the single tensor represented by a given tensor network and is widely used in statistical physics, quantum computing, and computer science. We introduce methods to efficiently approximate tensor network contractions using low-rank approximations, where each intermediate tensor generated during the contractions is approximated as a low-rank tree tensor network.
We introduce CATN-GO, an algorithm that uses graph theory to analyze the tensor network structure and generates contraction paths (rooted binary trees specifying how the tensor network is contracted) that yield both the minimum number of approximations and the minimum computational cost, improving both efficiency and accuracy. In addition, we introduce another algorithm named Partitioned Contract, which has the flexibility to incorporate a large portion of the tensor network when performing low-rank approximations, reducing the truncation error, and which includes a cost-efficient algorithm to approximate any tensor network by a tree structure. In the fourth part, we consider applications of tensor decompositions in quantum computing. We use canonical polyadic (CP) tensor decomposition to simulate and analyze quantum algorithms. We successfully simulate multiple quantum algorithms, including Grover’s search, the quantum Fourier transform, and quantum phase estimation, using low-rank CP decompositions, and we analyze the entanglement properties of specific quantum states through their CP decomposition ranks.
- Graduation Semester
- 2023-12
- Type of Resource
- Thesis
- Copyright and License Information
- Copyright 2023 Linjian Ma
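The alternating minimization described in the first part of the abstract can be illustrated with a minimal numpy sketch. This is plain alternating least squares (ALS) for a dense 3-way tensor, without the pairwise perturbation acceleration the thesis proposes; the names `cp_als` and `khatri_rao` are illustrative, not code from the thesis. Each update fixes two factor matrices and solves a linear least squares problem for the third, whose left-hand side is a Khatri-Rao product of the fixed factors:

```python
import numpy as np

def khatri_rao(X, Y):
    """Column-wise Kronecker product of X and Y (equal column counts)."""
    r = X.shape[1]
    return np.stack([np.kron(X[:, j], Y[:, j]) for j in range(r)], axis=1)

def cp_als(T, rank, iters=200, seed=0):
    """Rank-`rank` CP decomposition of a 3-way tensor T via alternating
    least squares: fix two factors, solve the normal equations for the third."""
    rng = np.random.default_rng(seed)
    I, J, K = T.shape
    A = rng.standard_normal((I, rank))
    B = rng.standard_normal((J, rank))
    C = rng.standard_normal((K, rank))
    for _ in range(iters):
        # Mode-n unfolding of T times the Khatri-Rao product of the other
        # factors, then the (Hadamard product of Gram matrices) pseudoinverse.
        T1 = T.reshape(I, J * K)
        A = T1 @ khatri_rao(B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        T2 = np.moveaxis(T, 1, 0).reshape(J, I * K)
        B = T2 @ khatri_rao(A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        T3 = np.moveaxis(T, 2, 0).reshape(K, I * J)
        C = T3 @ khatri_rao(A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C
```

On a tensor with exact low CP rank, this iteration typically recovers the decomposition to high accuracy; the thesis's pairwise perturbation method reduces the per-iteration cost of exactly these repeated normal-equation computations near convergence.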
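The sketching idea in the second part of the abstract can also be shown in miniature. Here a dense Gaussian embedding stands in for the tensor network-structured embeddings the thesis develops: a tall-and-skinny least squares problem is projected to a much smaller one whose solution is close to the exact one. All names and sizes below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, m = 10000, 20, 200               # tall-and-skinny problem; sketch size m << n

# Overdetermined least squares: min_x ||A x - b||
A = rng.standard_normal((n, d))
b = A @ rng.standard_normal(d) + 0.01 * rng.standard_normal(n)

# Gaussian embedding S projects rows of the problem to dimension m,
# so we solve the much smaller problem min_x ||S A x - S b||.
S = rng.standard_normal((m, n)) / np.sqrt(m)

x_exact = np.linalg.lstsq(A, b, rcond=None)[0]
x_sketch = np.linalg.lstsq(S @ A, S @ b, rcond=None)[0]
```

The sketched solve costs O(mnd + md^2) instead of O(nd^2) for the full problem once structured embeddings replace the dense `S`; the thesis exploits tensor network structure in both the data and the embedding so that `S @ A` itself is cheap to form.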
Owning Collections
Graduate Dissertations and Theses at Illinois (primary)