If you want to get involved in one of these projects, see Professor Sprenkle.
Automated test-case generation
Web application code is often large and complicated because it must dynamically handle millions of user requests and process massive amounts of information quickly and efficiently. As a result, the code is prone to errors and requires effective testing to expose them. Testing based on user requests is promising because such requests are cheap to record and the resulting tests focus on what users actually do. In 2005, Sant et al. proposed a user-based test case generation approach. However, their algorithms for choosing parameter values were limited to value combinations that appeared in the original user requests, and their evaluation was based on only one application.
The goal of our project is to improve upon Sant et al.'s control and data models to generate more effective test cases that more accurately emulate actual users.
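To make the idea of usage-based test generation concrete, here is a minimal sketch (not Sant et al.'s actual models) of a first-order Markov control model built from hypothetical logged user sessions, from which new request sequences can be sampled:

```python
import random
from collections import defaultdict

# Hypothetical logged user sessions: each is a sequence of request URLs.
sessions = [
    ["/login", "/search", "/view", "/logout"],
    ["/login", "/view", "/logout"],
    ["/login", "/search", "/search", "/view", "/logout"],
]

# Build a first-order Markov (control) model: record every observed transition.
transitions = defaultdict(list)
for session in sessions:
    states = ["START"] + session + ["END"]
    for current, nxt in zip(states, states[1:]):
        transitions[current].append(nxt)

def generate_test_case(rng=random):
    """Walk the model from START to END, sampling each next request
    in proportion to how often users made that transition."""
    case, state = [], "START"
    while True:
        state = rng.choice(transitions[state])
        if state == "END":
            return case
        case.append(state)

print(generate_test_case())
```

Because transitions are sampled with their observed frequencies, generated sequences follow paths that real users take often, while still producing new orderings not present in any single logged session.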
Automated oracle comparators
Software developers need automated techniques to maintain the correctness of complex, evolving Web applications. While there has been success in automating some of the testing process for this domain, there exists little automated support for verifying that the executed test cases produce expected results. We assist in this tedious task by providing a suite of automated oracle comparators for testing Web applications. To effectively identify failures, each comparator is specialized to particular characteristics of the possibly nondeterministic Web applications' output in the form of HTML responses. We are building on our previous work, with the goal of developing even more effective oracle comparators.
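As an illustration of a specialized oracle comparator, the sketch below masks nondeterministic fields (the timestamp and session-id patterns are illustrative assumptions, not our actual comparator suite) before comparing two HTML responses:

```python
import re

def normalize(html: str) -> str:
    """Mask content known to vary between runs so the comparator
    flags only meaningful differences. The patterns below are
    illustrative; a real comparator is tuned to the application."""
    html = re.sub(r"sessionid=\w+", "sessionid=<ID>", html)
    html = re.sub(r"\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}", "<TIMESTAMP>", html)
    html = re.sub(r"\s+", " ", html)  # collapse whitespace-only differences
    return html.strip()

def comparator(expected: str, actual: str) -> bool:
    """Oracle comparator: pass iff the responses match after
    nondeterministic fields are masked out."""
    return normalize(expected) == normalize(actual)

expected = "<p>Logged in at 2024-01-05 10:30:00, sessionid=abc123</p>"
actual = "<p>Logged in at 2024-01-06 09:15:22,  sessionid=xyz789</p>"
print(comparator(expected, actual))  # differs only in masked fields
```

A comparator that is too strict reports spurious failures on nondeterministic output; one that is too lenient misses real failures. Specializing each comparator to a class of output characteristics navigates this trade-off.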
NSF annual report tracker
National Science Foundation grant winners must submit an annual report to NSF describing their activities and results from the past year. When the report is due, winners often have trouble remembering everything they did that year.
They need a way to enter activities, tag them with the NSF Research.gov categories under which they will appear in the final report, and record any other information about the task, e.g., when it was completed, who completed it, and how many people were involved.
Then, when writing the report, they could search on particular tags within certain date ranges and get a draft list of activities that they can smooth out into paragraphs.
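The tagging and date-range search described above could be modeled as follows; this is a minimal sketch with made-up field names and example data, not a design for the actual tool:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Activity:
    description: str
    completed: date
    tags: set = field(default_factory=set)  # e.g., Research.gov report categories
    people: int = 1

def search(activities, tag, start, end):
    """Return activities with the given tag completed in [start, end],
    as a starting point for one section of the report."""
    return [a for a in activities
            if tag in a.tags and start <= a.completed <= end]

# Hypothetical activity log for one reporting year.
log = [
    Activity("Mentored two summer students", date(2023, 8, 1),
             {"training"}, people=2),
    Activity("Presented poster at GHC", date(2023, 10, 4),
             {"dissemination"}),
]
for a in search(log, "training", date(2023, 1, 1), date(2023, 12, 31)):
    print(a.description)
```

Each search result set corresponds to one report category, so the tool can emit a pre-sorted outline that the PI then rewrites as prose.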
It would be a web interface that is private to only those people who are allowed to enter data and create reports, basically the PIs on the grant.
Categories of activities: I have an example report with the categories, but I don't want to post it online.
Could use similar fields/information for W&L faculty's Faculty Activity Reports.
Past Undergraduate Research Projects and Their Outcomes
- Analyzing statistical usage-based navigation models for web applications and their resulting test cases (led to an ICST 2011 paper, which received the best research paper award, and an ICST 2012 paper)
- Comparing data models for automatically generating test cases for web applications
- Developing automated oracle comparators for web applications (led to an ISSRE 2007 publication)
- Developing WebVizOr, a tool for viewing the HTML results from executing test cases (led to a TAIC-PART 2008 publication)
- Developing tools for logging user accesses to Web applications, creating user sessions from the logged accesses, and automatically replaying the generated user sessions and other test cases (used as part of the framework in several publications)
- Customizing an online digital library, which was subsequently used as a subject application in testing research (included in several publications)
- Mutating Web application code to enable failure detection experiments (led to a GHC poster)