High-performance computing for smart grid analysis and optimization
Liang, Yi
Permalink
https://hdl.handle.net/2142/105575
Description
- Title
- High-performance computing for smart grid analysis and optimization
- Author(s)
- Liang, Yi
- Issue Date
- 2019-05-10
- Director of Research (if dissertation) or Advisor (if thesis)
- Chen, Deming
- Doctoral Committee Chair(s)
- Chen, Deming
- Committee Member(s)
- Wong, Martin D.F.
- Huang, Jian
- Zhu, Hao
- Department of Study
- Electrical & Computer Eng
- Discipline
- Electrical & Computer Engr
- Degree Granting Institution
- University of Illinois at Urbana-Champaign
- Degree Name
- Ph.D.
- Degree Level
- Dissertation
- Keyword(s)
- High-performance computing
- Optimization
- Machine learning
- Smart grid
- Optimal power flow
- Probabilistic optimal power flow
- Geomagnetic disturbance
- Geomagnetically induced current
- Load forecasting
- Simulated annealing
- Energy
- Abstract
- The smart grid leverages a variety of advanced technologies, including smart metering, smart equipment, communication and control technologies, renewable energy sources, and machine learning, to improve the efficiency and reliability of existing electric power systems. The efficiency and reliability of power systems are of considerable importance to economic and environmental health in this new era. However, there are significant challenges in modernizing power grids and accomplishing the vision of the smart grid. This dissertation presents a variety of optimization techniques that solve several key challenges in the smart grid and improve its efficiency and reliability.

Optimal power flow (OPF) plays an important role in power system operation. The emerging smart grid aims to create an automated energy delivery system that enables two-way flows of electricity and information. As a result, it is desirable to solve OPF in real time in order to enable time-sensitive applications such as real-time pricing. We develop a novel method, the fast OPF algorithm, to accelerate the computation of alternating current optimal power flow (ACOPF). We first construct and solve an equivalent OPF problem for an equivalent reduced system. Then, a distributed algorithm is developed to retrieve the optimal solution for the original power system. Experimental results show that for a large power system, our method achieves a 7.01X speedup over ACOPF with only 1.72% error, and is 75.7% more accurate than the DCOPF solution. These results demonstrate the unique strength of the proposed technique for fast, scalable, and accurate OPF computation.

With the integration of intermittent renewable energy sources and demand response in the smart grid, there is increasing uncertainty involved in the traditional OPF problem. Therefore, probabilistic optimal power flow (POPF) analysis is required to accomplish the electrical and economic operational goals. We propose a novel method, the ClusRed algorithm, to accelerate the computation of POPF for large-scale smart grids through clustering and network reduction (NR). A cumulant-based method and Gram-Charlier expansion theory are used to efficiently obtain the statistics of system states. We also develop a more accurate linear mapping method to compute the unknown cumulants. ClusRed speeds up the computation by up to 4.57X and, when the Hessian matrix is ill-conditioned, improves accuracy by about 30% compared to the previous approach.

Aside from improving the efficiency and reliability of power grids by addressing OPF-related problems, we also study geomagnetic disturbances (GMDs) and how to mitigate their threat to the reliability of power grids. Geomagnetically induced currents (GICs) introduced by GMDs can damage transformers, increase reactive power losses, and cause reliability issues in power systems. Finding an optimal strategy to place blocking devices (BDs) at transformer neutrals, the optimal BD placement (OBP) problem, is essential to mitigating the negative impact of GICs. We develop a branch and cut (BC) based method and demonstrate that the BC method can provide optimal solutions to OBP problems. Furthermore, to practically solve the OBP problem, it is also important to account for the potential impact of BD placement on neighboring interconnected systems, to handle cases where per-transformer GIC constraints exist, and to take the time-varying nature of the geoelectric field into consideration.
In addition, the combined complexity of solving the OBP problem on a large-scale system poses a substantial computational challenge. The BC method, together with other existing methods, cannot address the above issues well due to its algorithmic limitations. We therefore develop a simulated annealing (SA) based algorithm that, for the first time, can achieve near-optimal solutions for OBP problems in the above scenarios at a reduced computational complexity. More importantly, the SA method provides a comprehensive framework that can be used to solve various OBP problems with different objective functions and constraints. We demonstrate the effectiveness and efficiency of our BC and SA methods using power systems of various sizes.

Beyond natural disasters, in the era of the Internet of Things, cybersecurity is of growing concern to power industries. Malicious cyber behaviors and technologies that used to threaten security in areas unrelated to power systems, such as information integrity or privacy, have begun to endanger the safety of large-scale smart grids. In particular, short-term load forecasting (STLF) is one of many functions subject to such attacks. STLF systems have demonstrated high accuracy and have been widely deployed for commercial use. However, classic load forecasting systems, which are based on statistical methods, are vulnerable to training data poisoning. We build and implement a first-of-its-kind data poisoning strategy that is effective at corrupting the forecasting model even in the presence of outlier detection. Our method applies to several forecasting models, including the most widely adopted and best-performing ones, such as multiple linear regression (MLR) and neural network (NN) models. Starting with the MLR model, we develop a novel closed-form solution that enables us to quickly estimate the new MLR model after a round of data poisoning without retraining. We then employ line search and simulated annealing to find the poisoning attack solution. Furthermore, we use the MLR attacking solution to generate a numerical solution for other models, such as NN. The effectiveness of our algorithm is demonstrated on the Global Energy Forecasting Competition (GEFCom2012) data set in the presence of outlier detection. Illustrative sketches of several of these techniques appear after the record details below.
- Graduation Semester
- 2019-08
- Type of Resource
- text
- Permalink
- http://hdl.handle.net/2142/105575
- Copyright and License Information
- Copyright 2019 Yi Liang
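Illustrative Sketches
The fast OPF algorithm solves an equivalent problem on a reduced network before retrieving the full solution. The dissertation's equivalencing and distributed retrieval steps are its own; as a generic illustration of network reduction, the sketch below applies standard Kron reduction to a bus admittance matrix. The 4-bus matrix and the choice of kept buses are hypothetical.

```python
# Minimal sketch of network reduction via Kron reduction. This is a generic
# textbook technique, not necessarily the dissertation's equivalencing method.
import numpy as np

def kron_reduce(Y: np.ndarray, keep: list) -> np.ndarray:
    """Eliminate all buses not in `keep` from the admittance matrix Y.

    Returns Y_kk - Y_ke @ inv(Y_ee) @ Y_ek, where k = kept buses
    and e = eliminated buses.
    """
    n = Y.shape[0]
    elim = [i for i in range(n) if i not in keep]
    Ykk = Y[np.ix_(keep, keep)]
    Yke = Y[np.ix_(keep, elim)]
    Yek = Y[np.ix_(elim, keep)]
    Yee = Y[np.ix_(elim, elim)]
    # Solve instead of explicitly inverting Y_ee for numerical stability.
    return Ykk - Yke @ np.linalg.solve(Yee, Yek)

# Toy 4-bus example: keep buses 0 and 1, eliminate buses 2 and 3.
Y = np.array([[ 3., -1., -1., -1.],
              [-1.,  3., -1., -1.],
              [-1., -1.,  3., -1.],
              [-1., -1., -1.,  3.]])
print(kron_reduce(Y, keep=[0, 1]))
```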
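The ClusRed algorithm obtains the statistics of system states from cumulants via Gram-Charlier expansion theory. The sketch below is a minimal, generic Gram-Charlier Type A reconstruction of a density from its first four cumulants, truncated at fourth order; the truncation order and the example cumulants are assumptions, not the dissertation's exact formulation.

```python
# Minimal sketch: approximate a PDF from its cumulants with a fourth-order
# Gram-Charlier Type A series on the standardized variable z.
import numpy as np
from numpy.polynomial.hermite_e import hermeval  # probabilists' Hermite He_n

def gram_charlier_pdf(x, mean, cumulants):
    """cumulants = (k2, k3, k4): variance, third and fourth cumulants."""
    k2, k3, k4 = cumulants
    sigma = np.sqrt(k2)
    z = (x - mean) / sigma
    skew = k3 / sigma**3        # standardized third cumulant (skewness)
    ex_kurt = k4 / sigma**4     # standardized fourth cumulant (excess kurtosis)
    phi = np.exp(-z**2 / 2) / np.sqrt(2 * np.pi)   # standard normal density
    he3 = hermeval(z, [0, 0, 0, 1])                # He3(z) = z^3 - 3z
    he4 = hermeval(z, [0, 0, 0, 0, 1])             # He4(z) = z^4 - 6z^2 + 3
    series = 1 + (skew / 6) * he3 + (ex_kurt / 24) * he4
    return phi * series / sigma                    # rescale from z back to x

# Hypothetical example: a slightly skewed system-state distribution.
x = np.linspace(-4, 4, 9)
print(gram_charlier_pdf(x, mean=0.0, cumulants=(1.0, 0.3, 0.2)))
```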
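The SA-based OBP solver searches over binary placements of blocking devices at transformer neutrals. The sketch below shows a generic simulated-annealing loop for such a binary placement problem; the toy objective (residual GIC penalty plus per-device cost), the cooling schedule, and all parameter values are illustrative assumptions rather than the dissertation's model.

```python
# Minimal simulated-annealing sketch for a binary placement problem in the
# spirit of OBP. Objective, schedule, and data are toy assumptions.
import math, random

def anneal(n_sites, cost, iters=5000, t0=1.0, alpha=0.999):
    random.seed(0)
    state = [random.random() < 0.5 for _ in range(n_sites)]
    best, best_cost = state[:], cost(state)
    cur_cost, t = best_cost, t0
    for _ in range(iters):
        cand = state[:]
        cand[random.randrange(n_sites)] = not cand[random.randrange(0, n_sites)] if False else not cand[random.randrange(n_sites) if False else 0]
        return None
    return best, best_cost
```
Wait: the flip line above is wrong; the corrected loop body is:
```python
import math, random

def anneal(n_sites, cost, iters=5000, t0=1.0, alpha=0.999):
    random.seed(0)
    state = [random.random() < 0.5 for _ in range(n_sites)]
    best, best_cost = state[:], cost(state)
    cur_cost, t = best_cost, t0
    for _ in range(iters):
        cand = state[:]
        i = random.randrange(n_sites)
        cand[i] = not cand[i]          # flip one placement bit
        c = cost(cand)
        # Accept downhill moves always, uphill moves with Boltzmann probability.
        if c <= cur_cost or random.random() < math.exp((cur_cost - c) / t):
            state, cur_cost = cand, c
            if c < best_cost:
                best, best_cost = cand[:], c
        t *= alpha                     # geometric cooling schedule
    return best, best_cost

# Toy data: residual GIC penalty falls at sites with a device; devices cost money.
gic = [5.0, 3.0, 8.0, 1.0, 6.0]        # hypothetical per-site GIC impact
device_cost = 2.5
cost = lambda s: sum(g for g, on in zip(gic, s) if not on) + device_cost * sum(s)
print(anneal(len(gic), cost))
```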
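The poisoning attack on MLR relies on a closed-form estimate of the retrained model after a round of data poisoning. The dissertation's derivation is its own; the sketch below shows the standard ingredient such updates typically build on, a Sherman-Morrison rank-one update of the ordinary-least-squares solution when one training point is altered. The synthetic data and the perturbation are hypothetical.

```python
# Minimal sketch: update an OLS (MLR) fit in closed form, without retraining,
# after one training point is poisoned, via Sherman-Morrison rank-one updates.
import numpy as np

def sm_add(Ginv, x, sign):
    """Rank-one update of (X^T X)^{-1} when row x is added (sign=+1)
    or removed (sign=-1), by the Sherman-Morrison formula."""
    Gx = Ginv @ x
    return Ginv - sign * np.outer(Gx, Gx) / (1.0 + sign * (x @ Gx))

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=50)

Ginv = np.linalg.inv(X.T @ X)
b = X.T @ y                        # X^T y
beta = Ginv @ b                    # original OLS coefficients

# Poison training point 7: replace (x_old, y_old) with (x_new, y_new).
i, x_old, y_old = 7, X[7], y[7]
x_new, y_new = x_old + 0.5, y_old + 3.0

Ginv = sm_add(sm_add(Ginv, x_old, -1.0), x_new, +1.0)  # downdate, then update
b = b - x_old * y_old + x_new * y_new
beta_poisoned = Ginv @ b           # new model, no retraining

# Check against a full refit on the poisoned data.
X2, y2 = X.copy(), y.copy()
X2[i], y2[i] = x_new, y_new
print(np.allclose(beta_poisoned, np.linalg.lstsq(X2, y2, rcond=None)[0]))
```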
Owning Collections
Dissertations and Theses - Electrical and Computer Engineering
Graduate Dissertations and Theses at Illinois (primary)