Lucy's Journal


Abstract

As web applications become increasingly complex, automated testing becomes difficult due to the overload of information. State-of-the-art mechanisms for user-based testing automatically generate test suites in two main steps. First, the tester generates a sequence representing a user’s path through the application, and then she fills in parameter values. We focus on the second stage of generation, called the data model, and we propose a number of characteristics for classifying parameters. Parameters are categorized based on their use in the application as well as their treatment by the user. Based on the classification of a parameter, we can then pick the combination of factors that determines the best or most realistic parameter value. Our factors are the other elements of the user’s path through the application that may affect the value of the parameter. By determining which of these factors is most relevant for a given type of parameter, we can build a more accurate data model and thus more accurately generate our desired test suites.

2011/01/20 15:41

"A Combinatorial Approach to Building Navigational Graphs for Dynamic Web Applications" (Wang, Lei, Kacker, Kuhn, Sampath, Lawrence)

The goal of this paper is to effectively model the navigation of a dynamic web application. The authors divide the problem into two subproblems:

1. The page explosion problem: the number of possible pages makes it impractical to consider each page individually.

2. The request generation problem: many pages can only be reached when the user has already visited certain other pages.

Their contributions are:

1. An “abstraction scheme” to address the page explosion issue: “pages that are likely to have the same navigation behavior are grouped together” as one node.

2. A way to combine parameter values using pairwise coverage.

Notably, they do not use user sessions to create the models.

This group splits a URL into two components, the “base” and the “query.” The query is composed of the parameter name-value pairs, and the base is the rest. An abstract URL, for them, is the base plus the parameter names. They consider pages with the same abstract URL equivalent, like the other papers we have been reading.
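To make the abstraction concrete for myself, here is a minimal sketch in Python; the function name and example URL are my own inventions, not the paper's:

  from urllib.parse import urlsplit, parse_qsl

  def abstract_url(url):
      # Split the URL into the "base" (everything before the query) and
      # the "query" (parameter name-value pairs), then keep only the
      # parameter names, sorted for a canonical form.
      parts = urlsplit(url)
      base = parts.scheme + "://" + parts.netloc + parts.path
      names = sorted({name for name, _ in parse_qsl(parts.query)})
      return base + "?" + "&".join(names) if names else base

  # Two pages that differ only in parameter values map to the same node:
  # abstract_url("http://shop.example/view?item=42&user=lucy")
  #   -> "http://shop.example/view?item&user"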

The Combinatorial Strategy… For a form with k parameters, each of which may take a different number of values, one cannot test every possible combination. In their testing, “given any two out of the k parameters, [they] ensure that every combination of the two parameters is covered in at least one submission.” An important feature they note is that this pairwise coverage requires far fewer tests than covering all combinations. They use an extension of the pairwise strategy called In-Parameter-Order (IPO), which builds a pairwise test set for the first two parameters and then extends it one parameter at a time until all parameters are covered.
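To pin down what the pairwise criterion means, here is a small sketch (mine, not theirs) that checks whether a test suite achieves pairwise coverage; each test is a dict mapping parameter names to values, and 'domains' maps each parameter to its possible values:

  from itertools import combinations, product

  def is_pairwise_covered(tests, domains):
      # For every pair of parameters, every pair of their values must
      # appear together in at least one test.
      for p1, p2 in combinations(domains, 2):
          needed = set(product(domains[p1], domains[p2]))
          seen = {(t[p1], t[p2]) for t in tests}
          if needed - seen:
              return False
      return True

  # With 3 boolean parameters, exhaustive testing needs 2**3 = 8 tests,
  # but these 4 already cover all pairs:
  tests = [{"a": 0, "b": 0, "c": 0}, {"a": 0, "b": 1, "c": 1},
           {"a": 1, "b": 0, "c": 1}, {"a": 1, "b": 1, "c": 0}]
  print(is_pairwise_covered(tests, {"a": [0, 1], "b": [0, 1], "c": [0, 1]}))  # True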

This paper also talks a lot about web crawling and compares the algorithms of web crawlers to the way the navigation model is created. They say, “One common approach used…is to build a pre-defined list of values for the parameters that are frequently encountered,” and go on to write that this was helpful for them as well.

Something I do not understand about the paper is why they talk about using the combinatorial approach and pairwise coverage, yet on page 215 discuss how to get parameter values (either from the user as they are needed, or from a pre-approved list). How do the two mesh?

To create the navigation model, they essentially perform a non-recursive depth-first search starting from the home page of the web application. To maintain state, they undo any state changes when “backing up” in the tree. They also avoid loops by disregarding any duplicate pages (which, they concede, is a limitation, but a necessary one). A rough sketch of this traversal appears below.
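Here is how I picture that traversal; everything below (get_links, apply_request, undo_request) is a hypothetical stand-in, since the paper does not give pseudocode at this level:

  def build_navigation_graph(home, get_links, apply_request, undo_request):
      # Iterative depth-first search from the home page. State changes
      # are applied when a page is entered and undone when we back up,
      # and duplicate pages are skipped to avoid loops.
      seen = {home}
      graph = {}
      apply_request(home)
      stack = [(home, iter(get_links(home)))]
      while stack:
          page, links = stack[-1]
          link = next(links, None)
          if link is None:
              stack.pop()
              undo_request(page)  # back up: revert this page's state changes
              continue
          graph.setdefault(page, []).append(link)
          if link not in seen:  # disregard duplicate pages
              seen.add(link)
              apply_request(link)
              stack.append((link, iter(get_links(link))))
      return graph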

Limitations:

-Pairwise coverage… why does this seem like a good idea? Parameters can depend on one another, and different parameters have very different sets of possible values. And with IPO, isn't this extremely inefficient?

-(p. 214) They assume that users always start at the home page of a web application, which is certainly not true.

-They “assume that the navigation behavior of a web page…does not depend on specific parameter values” (216). This is definitely not a safe assumption to make! For example, imagine an application that lets users log in, and suppose there is more than one type of user (e.g., 'customer' and 'administrator' or 'seller'). If parameter values are assumed to make no difference, the pages associated with some of these user types might never get tested.

-Another limitation, which they state explicitly, is that “the number, the size…, and the specific technologies…of the subject applications prevent a generalization of [their] results.” But isn't the entire point to generalize?

2010/10/04 11:29

"Statistical Testing of Web Applications" (Tonella, Ricca)

Summary

This paper's goal is to semi-automatically recreate realistic test suites for any given web application. The authors analyze the HTML code generated by the application and employ humans to determine all the input values “which cover all relevant navigations.” They also use a reverse engineering tool that they previously developed to “automatical[ly] extract… [an] explicit-state model of the … web application.”

Based on the variables pre-specified by the user, which create equivalence classes, they supply the application with a particular list of inputs in a separate file. When the user has not identified equivalence classes of input, or when the web application is in a different state to which the user's specifications do not apply, the authors use a semi-automatic process called page merging to “simplify the explicit state model” and determine equivalence classes for the input. They have three (decreasingly automatic) criteria for comparing two dynamically generated HTML pages:

a) pages that are literally identical are considered the same page

b) pages that have “identical structures but different texts, according to a comparison of the syntax trees of the pages, are considered the same page”

c) pages that “have similar structure, according to a similarity metric, such as the tree edit distance, computed on the syntax trees of the pages, are considered the same.”

Essentially: if the pages are not literally identical, they compare the syntax with varying degrees of leniency; a sketch of the three criteria follows.
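Here is a minimal sketch of those criteria in Python. The 'structure' argument is a hypothetical helper that strips the text and keeps only the tag skeleton (standing in for the paper's syntax trees), and difflib's similarity ratio stands in for their tree edit distance:

  import difflib

  def same_page(html_a, html_b, structure, threshold=0.9):
      if html_a == html_b:                  # (a) literally identical
          return True
      sa, sb = structure(html_a), structure(html_b)
      if sa == sb:                          # (b) same structure, different text
          return True
      similarity = difflib.SequenceMatcher(None, sa, sb).ratio()
      return similarity >= threshold        # (c) similar enough structure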

They also summarize the main phases of statistical testing:

1) Construction of a statistical testing model, based on available user data

2) Test case generation, based on the statistics encoded in the testing model (modeled as a Markov chain, called the usage model, whose transition probabilities are estimated from the user data)

3) Test case execution and analysis of execution output for reliability estimation

Their main contribution is in the first step, with the semi-automatic creation of equivalence classes. They also deal with state by exploiting the hidden variables that determine whether the state stays constant.
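For phase 2, I imagine test case generation working roughly like this random walk over the usage model; the state names and transition probabilities below are invented for illustration:

  import random

  def generate_session(usage_model, start="home", end="exit"):
      # Walk the Markov chain from the start state until the end state,
      # sampling each transition according to its probability.
      session, state = [start], start
      while state != end:
          successors = usage_model[state]
          state = random.choices(list(successors),
                                 weights=list(successors.values()))[0]
          session.append(state)
      return session

  usage_model = {"home": {"search": 0.7, "exit": 0.3},
                 "search": {"results": 1.0},
                 "results": {"search": 0.4, "exit": 0.6}}
  print(generate_session(usage_model))  # e.g. ['home', 'search', 'results', 'exit']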

Limitations:

In general, this paper seems to have some excellent ideas–like state parameters and equivalence classes of parameter values–but it relies far too heavily on humans to make decisions that should be automated.

  1. Why are these authors so intent on creating the most likely user sessions? Shouldn't everything be tested? They never really justify ordering the test cases from most likely to least likely beyond saying: “complete testing of all paths/interactions is unfeasible for any non-trivial system. Statistical testing aims at focusing on the portions of the system under test that are more frequently accessed, in order to ensure that reliability of the delivered product is high.”
  2. The authors deal with state changes, but do not make it clear how they incorporate these hidden parameters into the models, or how they determine the hidden parameters in the first place. Making state a variable is an interesting idea with a lot of potential, but I think it would be simpler to model it separately.
  3. The authors have a usage model, which incorporates both the HTML requests and the parameter values. As the previous paper we read showed, combining parameter values with the HTML request sequence makes a model extremely complex. The question, however, is whether this complexity is necessary or whether it is better to separate the two into different models.
  4. The paper also focuses on the rate of failure, but does not define what a failure is, and passes over the fact that “the test engineer…checks whether the output is correct” for each page.
  5. It is not feasible to have the equivalence classes created by a human, especially for very complex web applications. Additionally, it is easy for a human to make a mistake.
2010/09/23 15:19

My thoughts on "An Exploration of Statistical Models for Automated Test Case Generation" (Sant, Souter, Greenwald)

Summary

- This paper explores the automatic creation of test suites from logged user sessions. The authors used two models to generate the test cases: a control model, which determined the sequence of URLs in the test case, and a data model, which supplied values for each parameter required by a given URL. Both the control model and the data model were statistical Markov models built from the user data.

- The paper also introduces the term “important parameter.” For them, an important parameter is one whose value, at some point in the user sessions, stayed constant over two consecutive requests.

- The authors test variants of two data models: Simple, which “captures only the probability that a set of values…is present on the given page” (a unigram model), and Advanced, which considers the previous page (bigram) and, if there are any important parameters, automatically takes on their values. Ultimately, they use uni-, bi-, and tri-gram versions of the Simple and Advanced models to create test suites and find, surprisingly, that the unigram model most quickly achieves the highest percentage of statement coverage.
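For my own reference, here is a sketch of how the Simple/unigram data model might be trained from logged sessions. The session format, a list of (page, values) pairs with values as a frozenset of name-value tuples, is my assumption, not the paper's:

  from collections import Counter, defaultdict

  def train_simple_model(sessions):
      # For each page, estimate the probability of each observed set of
      # parameter values by counting its submissions and normalizing.
      counts = defaultdict(Counter)
      for session in sessions:
          for page, values in session:
              counts[page][values] += 1
      model = {}
      for page, c in counts.items():
          total = sum(c.values())
          model[page] = {vals: n / total for vals, n in c.items()}
      return model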

Limitations

- They recognize that a major limitation of their study is that it only included one application, an online bookstore. However, they do not acknowledge that their measure of success is faulty as well: they rank successful test suites based on the number of book purchases in those suites. The paper states that “a valid user session is one that exercises an application in a meaningful way,” but it does not acknowledge that there are many uses for an online bookstore that do not include purchasing books, such as browsing, checking certain information about a book (author, publishing date, etc.), or comparing prices with other bookstores. It is admittedly much harder to tell what information a user was looking for if she does not buy the book, but a user session that does not contain a book purchase is only unsuccessful for the bookstore, not for the web developer. This measure of success also cannot easily extend to other applications, which is a huge limitation as well. However, the authors redeem themselves by using the percentage of statements covered as another comparison measure.

- The paper compares the rate of book purchases per user session in the real user sessions (1.5) to the rates produced in the test suites (which all fell between .4 and .8). This is a more valid comparison. However, it is not necessarily best to aim for the most realistic test suites; they should contain both the most and the least likely user sessions. In fact, this seems to be an open question: it is easy enough to determine which sequences of requests are most and least likely, but what should the test suites actually contain? As the results show, the unigram model (essentially random) quickly covered more statements than the other models, though statement coverage for all of them converged after about 40 sessions. They note that this is because the bi- and tri-gram models are, in a sense, too predictable.

- The authors write, “Our original motivation…was to be able to generate user sessions that combined subsequences from different original user sessions to create sequences that had not been seen previously. These novel user sessions would exercise parts of an application not exercised on the original test suite.” Despite this and a question in the introduction, however, the paper does not explore the idea that the unigram model does so well precisely because it is random. Why is random good? One possible answer is that it finds unusual or unlikely user actions; another is that it simply has a better chance of generating diverse sessions than bi- and tri-gram models that mimic the original user sessions.

- An unrelated limitation is their narrow view of important parameters. A parameter is important if its value stays constant over two consecutive requests. The idea of important parameters is very helpful, but note that if a parameter’s value happens to stay constant even once, it will be held constant in all the test suites. This should not happen. One simple fix would be to assign each parameter a probability measuring how likely its value is to stay constant, as sketched below.
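A minimal sketch of that fix, assuming each session is a list of requests and each request is a dict mapping parameter names to values (my format, not the paper's):

  from collections import Counter

  def constancy_probabilities(sessions):
      # For each parameter, count how often its value stays the same
      # across consecutive requests, then turn counts into probabilities.
      constant, total = Counter(), Counter()
      for session in sessions:
          for prev, curr in zip(session, session[1:]):
              for name in prev.keys() & curr.keys():
                  total[name] += 1
                  if prev[name] == curr[name]:
                      constant[name] += 1
      return {name: constant[name] / total[name] for name in total}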

2010/09/09 16:05