Empowering automated mobile UI testing with external support
Wang, Wenyu
Permalink
https://hdl.handle.net/2142/115901
Description
- Title
- Empowering automated mobile UI testing with external support
- Author(s)
- Wang, Wenyu
- Issue Date
- 2022-07-08
- Director of Research (if dissertation) or Advisor (if thesis)
- Xie, Tao
- Doctoral Committee Chair(s)
- Xie, Tao
- Committee Member(s)
- Marinov, Darko
- Xu, Tianyin
- Prasad, Mukul
- Department of Study
- Computer Science
- Discipline
- Computer Science
- Degree Granting Institution
- University of Illinois at Urbana-Champaign
- Degree Name
- Ph.D.
- Degree Level
- Dissertation
- Keyword(s)
- UI testing
- software testing
- mobile app
- Abstract
- While mobile devices have become an integral part of modern daily life, the ever-growing complexity and fast pace of app feature development have imposed unprecedented challenges on making mobile apps robust and reliable. The User Interface (UI), as the primary medium of user interaction, is naturally a good entry point for testing mobile apps. While manual and scripted UI testing is common practice, automated UI testing is becoming increasingly popular. By mimicking how human users interact with apps through their UIs, automated UI test generation tools can detect reliability and usability issues, complementing manual and scripted testing while requiring little to no human testing effort. After years of development, numerous mobile UI test generation tools have emerged from both the research community and industry, mainly focusing on designing novel exploration algorithms on the Android platform. Although each tool reports the best overall test effectiveness in its own evaluation setting, most existing tools barely outperform a baseline tool, Monkey, when evaluated on comprehensive sets of Android apps. This observation comes from independent measurement studies (including one described in this dissertation) involving both relatively simple open-source apps and popular, complex industrial apps. The finding, which contradicts researchers' common belief, reveals a significant effectiveness gap that needs to be closed for automated mobile UI testing, especially on industrial apps, which generally have high impact.

  To understand this effectiveness gap, we empirically investigate the test process and results from our measurement study, yielding three relevant findings: (1) there is no "silver-bullet" tool that outperforms all other tools on every app, suggesting that it is difficult to build a single tool that adapts well to different apps with diverse UI designs; (2) a tool's test effectiveness is not determined solely by its exploration algorithm, and the tool's implementation also makes a difference; (3) it is possible to enhance the design or the implementation of different tools using unified approaches. Given existing work's focus on designing novel exploration algorithms, our findings suggest that it is worthwhile to develop complementary techniques that enhance existing tools so as to unleash the power of different exploration algorithms on various complex industrial apps.

  Inspired by these findings, this dissertation presents three parts of research that explore how existing automated UI test generation tools can be empowered with external automated support, i.e., techniques that are applicable to various tools while keeping them fully automatic. These parts enhance different components in the workflow of automated Android UI test generation tools. The first part (TOLLER) enhances the infrastructure support through which a tool's exploration algorithm obtains UI states from, and executes actions on, the test device (a minimal sketch of such an observe-and-act loop is given after this record's metadata), allowing the tool to iterate faster and cover more App Under Test (AUT) functionality within a limited time budget. The second part (VET) provides app-specific exploration guidance for a tool, based on our observation that a tool's exploration algorithm or implementation may have applicability issues under certain conditions. The third part (EPIT) coordinates parallel runs of a tool on a specific AUT across multiple test devices, improving overall test effectiveness or reducing testing costs by reducing overlapping exploration. Our evaluations show that the proposed techniques help state-of-the-art automated Android UI test generation tools achieve substantially better test effectiveness or lower testing costs on popular and complex industrial apps.
- Graduation Semester
- 2022-08
- Type of Resource
- Thesis
- Copyright and License Information
- © 2022 Wenyu Wang
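Illustrative sketch (not taken from the dissertation): the abstract describes tools that repeatedly observe UI states and execute actions on a test device. The snippet below is a minimal, hedged approximation of that observe-and-act loop using standard adb and UIAutomator commands; it assumes `adb` is on the PATH and a device or emulator is connected, and it is not the TOLLER, VET, or EPIT implementation.

```python
# Minimal sketch of the observe/act loop underlying automated Android UI
# test generation tools: dump the current UI hierarchy, pick a clickable
# element, and tap it. Assumes `adb` is available and a device is connected.
import random
import re
import subprocess
import xml.etree.ElementTree as ET


def dump_ui_hierarchy() -> ET.Element:
    """Capture the current UI state via UIAutomator and return its XML root."""
    subprocess.run(["adb", "shell", "uiautomator", "dump", "/sdcard/ui.xml"],
                   check=True, capture_output=True)
    xml_text = subprocess.run(["adb", "shell", "cat", "/sdcard/ui.xml"],
                              check=True, capture_output=True, text=True).stdout
    return ET.fromstring(xml_text)


def clickable_centers(root: ET.Element):
    """Yield (x, y) centers of nodes marked clickable in the dumped hierarchy."""
    for node in root.iter("node"):
        if node.get("clickable") == "true":
            # bounds are serialized as "[x1,y1][x2,y2]"
            x1, y1, x2, y2 = map(int, re.findall(r"-?\d+", node.get("bounds", "")))
            yield (x1 + x2) // 2, (y1 + y2) // 2


def explore(steps: int = 20) -> None:
    """Naive random exploration: observe the UI, then execute one tap action."""
    for _ in range(steps):
        targets = list(clickable_centers(dump_ui_hierarchy()))
        if not targets:
            break
        x, y = random.choice(targets)
        subprocess.run(["adb", "shell", "input", "tap", str(x), str(y)], check=True)


if __name__ == "__main__":
    explore()
```

Real tools replace the random choice with their own exploration algorithms; the dissertation's contributions target the surrounding infrastructure, guidance, and parallelization rather than this inner loop itself.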
Owning Collections
Graduate Dissertations and Theses at Illinois (primary)