Anna's Journal

"Investigating Data Models for Automatically Generating Tests for Web Applications"

This paper looks at several factors that affect the parameters of a web page, and how those factors could allow us to test better and more efficiently. The factors explored include parameter interactions, history, and user roles. Parameter interaction involves looking at the probability of a set of parameter values together, rather than each parameter individually. History is fairly self-explanatory: it examines whether a user's previous actions can help predict where they will go next. Lastly, user roles has to do with a user's specific role or permissions on a website and how those affect the parameter values. The idea is that these factors could help us predict what the parameter values should be so that we can develop more accurate test suites.
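To make the parameter-interaction idea concrete, here is a tiny Python sketch with made-up data (the parameter names and values are my own, not from the paper): estimating the probability of a set of parameter values jointly can give a different answer than multiplying per-parameter probabilities.

```python
# Toy illustration (invented data): independent vs. joint estimates
# for a pair of parameters logged together on the same requests.
from collections import Counter

# Each logged request sets a (category, sort) parameter pair.
logged = [("books", "price"), ("books", "price"),
          ("music", "rating"), ("books", "rating")]
n = len(logged)

# Independent model: P(category=books) * P(sort=price)
p_cat = Counter(c for c, _ in logged)["books"] / n    # 3/4
p_sort = Counter(s for _, s in logged)["price"] / n   # 2/4
p_independent = p_cat * p_sort                        # 0.375

# Interaction model: estimate P(category=books, sort=price) directly
p_joint = Counter(logged)[("books", "price")] / n     # 0.5

print(p_independent, p_joint)  # the two models disagree
```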

This paper left me with a lot of questions. In order to use these factors to predict parameter values, we need to be able to quantify them. That seems like a challenge, since things like user role and history may not be easily converted into something computational. Additionally, how do you actually figure out in what way a factor affects a parameter value? Since there are many possible values, this seems difficult.

This paper, and my questions about it, lead directly into what I am working on. I am trying to determine how we can combine two or more factors and use joint conditional probability to improve the predicted parameter values even more. This paper was very helpful to read because it really got me thinking about the various interactions between factors and parameter values.
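As a rough sketch of what combining factors might look like (all names and data here are hypothetical, not our actual code), we could condition a parameter value's probability on a user's role and their previous request at the same time:

```python
# Hypothetical sketch: estimate P(value | role, previous_request)
# jointly from logged data.
from collections import Counter, defaultdict

# Toy log entries: (user_role, previous_request, parameter_value)
log = [("admin", "inventory", "id=42"),
       ("admin", "inventory", "id=42"),
       ("customer", "search", "id=42"),
       ("customer", "search", "id=7")]

counts = defaultdict(Counter)
for role, prev, value in log:
    counts[(role, prev)][value] += 1  # condition on both factors at once

def p_value(value, role, prev):
    """Estimate P(value | role, prev) from the counts."""
    seen = counts[(role, prev)]
    return seen[value] / sum(seen.values()) if seen else 0.0

print(p_value("id=42", "admin", "inventory"))  # 1.0
print(p_value("id=42", "customer", "search"))  # 0.5
```

The obvious catch, which connects to my questions above, is data sparsity: conditioning on more factors at once means fewer logged examples per combination.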

2011/02/02 19:44

January 2011 Update

Since I was abroad last semester, I am just joining the project. I have spent the last couple of weeks going over the material from Professor Sprenkle's Software Engineering Through Web Applications class to get a solid base of information. The labs from that class cover the basics of web applications, including HTML and CSS, servlets, and how to use Subversion.

As we prepare for the SSA Conference at W&L and the Tapia Conference, I am starting to work with Lucy on factors that affect parameter values. I am currently trying to figure out if/how we can combine factors that we know are relevant to get a better test suite.

2011/01/24 16:56

"An Exploration of Statistical Models for Automated Test Case Generation"

Web-based applications are difficult to test, and many test suites do not have good coverage and/or quality. The authors' goal is to present a new and improved technique for generating test cases and to investigate the effectiveness of the test suites generated from the various models discussed in the paper. The authors contribute several modeling methods, based on statistical machine learning techniques, that are accurate and have high coverage. These models represent the dynamic nature of web applications.

A common method for testing is to use logged user data to model the dynamic behavior of a web application. This paper expands on that idea, but instead of using the logged data directly, the data is used in conjunction with machine learning techniques to automatically build models. The authors consider questions about the most/least likely user sessions, the order of navigation through web pages, and others, in order to generate user sessions based on the statistical distribution represented in the logged data. To determine the probability of each request, they use conditional independence assumptions, or Markov assumptions. These assumptions mean fewer prior requests need to be known, so the probabilities can be represented compactly. A bigram is the case in which only the previous request is needed to estimate the probability of the next request, and a trigram requires the two prior requests.
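To make the bigram idea concrete, here is a minimal sketch with invented session data (not the authors' code or their actual requests):

```python
# Bigram (first-order Markov) estimate of P(next request | previous request).
from collections import Counter, defaultdict

sessions = [["login", "search", "view", "buy"],
            ["login", "view", "buy"],
            ["login", "search", "search", "view"]]

bigram = defaultdict(Counter)
for s in sessions:
    for prev, nxt in zip(s, s[1:]):
        bigram[prev][nxt] += 1  # count each prev -> next transition

def p_next(nxt, prev):
    """Probability of the next request given only the previous one."""
    total = sum(bigram[prev].values())
    return bigram[prev][nxt] / total if total else 0.0

print(p_next("view", "search"))  # 2 of the 3 transitions out of "search"
```

A trigram version would key the counts on the previous two requests instead, trading compactness for more context.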

The researchers evaluated the effectiveness of 6 variations of the model using 5 test suites, each containing 200 generated user sessions, on an example bookstore web application. The conclusion of the study was interesting: the 1-gram model, which uses no prior history (only overall request frequencies), had fewer successful book purchases but the highest percentage of code coverage. It would seem that by generating test cases more randomly, this model managed to reach more error cases than the models designed to look more like real user sessions.

The authors discuss several limitations, specifically the fact that only one application was tested and the way in which the user session data was generated. It is very possible that the 1-gram model would not be the most effective overall if it were tested on a more varied set of applications. I think it would be a good idea to group types of web applications and compare which models work best for each category. For example, web applications where you can browse and buy products should be tested differently than search engines or social networking sites. The authors also mention that users were given a list of suggested activities to perform when generating “random” user sessions, and they worry that this does not represent accurate user sessions. I definitely think we need to consider that there may be other factors besides history that could affect the model. Even the best model only has about 55% code coverage, which could definitely be improved.
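As a rough reconstruction of how test cases could come out of such a model (my own sketch, not the authors' implementation), one can sample sessions by walking the learned transition distribution:

```python
# Sample synthetic user sessions from a bigram model (toy data again).
import random
from collections import Counter, defaultdict

sessions = [["login", "search", "view", "buy", "logout"],
            ["login", "view", "buy", "logout"]]
bigram = defaultdict(Counter)
for s in sessions:
    for prev, nxt in zip(s, s[1:]):
        bigram[prev][nxt] += 1

def generate_session(start="login", end="logout", max_len=20):
    """Random walk over the transition counts until the end request."""
    session = [start]
    while session[-1] != end and len(session) < max_len:
        options = bigram[session[-1]]
        if not options:
            break
        nxt = random.choices(list(options),
                             weights=list(options.values()))[0]
        session.append(nxt)
    return session

suite = [generate_session() for _ in range(5)]  # a tiny generated "suite"
```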

Do you think the 1-gram model would be as effective for other types of web applications? How about a combination of a 1-gram and a higher-gram model to generate a distribution of random and realistic user sessions? What other factors could be integrated into the model to make it better?

2010/09/08 13:01

Anna

ok this is a test :)

2010/05/18 18:40