
 

The Quantitative Analysis of User Behavior Online—

Data, Models and Algorithms

 

By Prabhakar Raghavan (USA)

Reviewed by Hülya Yalçin (USA)

Dr. Raghavan began his talk by stating his interest in users' online behavior, and in particular in where they look on the screen. He is interested in watching people's gaze, but at the scale of millions of users. It is no wonder that so much research goes into understanding the role human gaze plays in search and query processes when the company is one of the pioneers of the search-engine business, Yahoo!.

Dr. Raghavan's talk covered behavioral and computational studies of two-dimensional search results and the measurement of online user engagement.

First, Dr. Raghavan briefly explained how classical one-dimensional search works. Each document matching a query is assigned a score. Designers of the search engine pick hundreds of features, such as the number of links into a page or the number of occurrences of the query terms in the page. The search engine's editors then create training data as a long series of tuples, each consisting of a query, a document, and a relevance judgment. The relevance judgment indicates how relevant the document is to the query. For example, if the Yahoo! home page appears first when searching for "yahoo", the relevance judgment is a perfect match; if the HP home page appears first when searching for "ibm", it is a poor match.

With classical one-dimensional search, the objects are listed in decreasing order of score, the highest-scoring object first. The user's eye gaze is then trivial to estimate, since the user scans each results page from top to bottom.
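As a toy illustration of this one-dimensional setup, the sketch below scores documents by a linear combination of hand-picked features and lists them in decreasing order of score. The feature names and weights are hypothetical, not Yahoo!'s actual ranking model.

```python
# Hypothetical sketch of classical one-dimensional ranking.
def score(doc, weights):
    # Linear combination of hand-picked features, e.g. the number of
    # in-links and the number of query-term occurrences in the page.
    return sum(weights[f] * doc.get(f, 0.0) for f in weights)

docs = [
    {"name": "page_a", "inlinks": 120, "term_hits": 3},
    {"name": "page_b", "inlinks": 800, "term_hits": 1},
    {"name": "page_c", "inlinks": 40,  "term_hits": 7},
]
weights = {"inlinks": 0.01, "term_hits": 1.0}  # invented weights

# List objects in decreasing order of score: highest-scoring first.
ranked = sorted(docs, key=lambda d: score(d, weights), reverse=True)
print([d["name"] for d in ranked])
```

With this layout the user's scan order is simply the list order, top to bottom.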

In image and product search, however, images are laid out by decreasing score in row-major order, and a variety of evidence suggests that the user's eye does not scan in row-major order. How, then, does the eye scan the page?

Understanding how the user's eye scans the page is an important research topic for companies in the search-engine business, especially for advertising. The central question is how the objects on the results page should be laid out. Dr. Raghavan stressed the need for a more general 2-d layout in which objects are placed more heterogeneously on the results page. Viewing the results page as 2-d real estate, the problem boils down to making richer use of that real estate and optimizing every pixel. Given the results of a query, how should the objects be placed on the page, and what layout best exploits the 2-d representation? And what does "best" mean? In classical one-dimensional search, for instance, the top-scoring object is placed at the top of the page; a 2-d analog of this is sought.

In fact, the problem goes beyond image and product result matrices: the visual cues that drive eye tracking in search engines are not well understood. Search engines can log users' click trails, but not why users click what they click. The researchers decided that combining eye-gaze trails with click logs might yield a better handle on the problem.

Researchers at Yahoo! formulated eye scans as a Markov chain M in which each slot is a state; in each state, the user may click, stop, or proceed to a neighboring slot. The measure of a layout's quality is the expected total score of the objects seen. Given the Markov chain M and a set of objects, each with a utility and a stopping probability, the user accrues an object's utility upon clicking it. The problem then becomes an optimization problem: find an embedding of objects into slots that maximizes the expected total user utility. The same model captures revenue maximization for placing advertisements on a website, the ultimate goal of search-engine companies. The model is of course open to criticism, since the probabilities may depend on the surrounding images.
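A much-simplified sketch of this utility model, assuming a single left-to-right scan (a cascade-style special case of the Markov chain described in the talk; the numbers are invented), might look like:

```python
# Simplified cascade sketch of the layout-utility model: each object
# has a utility and a stopping probability, and the user scans the
# slots in order, accruing an object's utility upon seeing it.
def expected_utility(layout):
    # layout: list of (utility, stop_prob) pairs in scan order.
    reach = 1.0   # probability the eye reaches the current slot
    total = 0.0
    for utility, stop_prob in layout:
        total += reach * utility    # utility accrues when the object is seen
        reach *= 1.0 - stop_prob    # user continues with prob 1 - stop_prob
    return total

a = (10.0, 0.5)   # high utility, high stopping probability (hypothetical)
b = (2.0, 0.3)

print(expected_utility([a, b]))   # a is always seen, b only half the time
print(expected_utility([b, a]))
```

Even in this toy version, the layout changes the expected utility, which is exactly the quantity the embedding is chosen to maximize.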

Another obstacle is that the underlying Markov chain is not known. Although users' click sequences (trails) for a query are known and the chain can in principle be estimated from them, successive clicks may not land on adjacent slots. Maximum-likelihood estimation of the chain is NP-hard, despite the grid structure.
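For intuition only, here is a naive count-based estimate of transition probabilities from click trails; it sidesteps exactly the difficulty raised above, by pretending every observed transition is a direct one. State names and trails are invented.

```python
# Naive count-based estimate of Markov-chain transition probabilities
# from click trails (hypothetical data). Real trails may skip slots,
# which is what makes true maximum-likelihood estimation NP-hard.
from collections import Counter, defaultdict

trails = [["s0", "s1", "s2"], ["s0", "s1", "s1"], ["s0", "s2"]]

counts = defaultdict(Counter)
for trail in trails:
    for src, dst in zip(trail, trail[1:]):
        counts[src][dst] += 1   # count each observed transition

# Normalize counts into transition probabilities per source state.
P = {src: {dst: c / sum(nbrs.values()) for dst, c in nbrs.items()}
     for src, nbrs in counts.items()}
print(P["s0"])
```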

Experimental results with Markov-chain inference validate the eye-tracking observation that users' eyes do not scan in row-major order. Rather, the eyes scan the page within what the researchers call a golden triangle toward the upper-left corner of the results page. The researchers at Yahoo! also found a silver triangle toward the bottom-right corner. Guess what? Users' eyes also scan the bottom-right corner of the results page in order to go to the next page!

Designers at Yahoo! devised a fast placement algorithm called HIT, which computes the hitting times of the slots in M and places objects, in decreasing order of score, into slots in increasing order of hitting time. HIT dominates simpler algorithms such as EIGEN, COLUMN, and ROW; its ordering is derived from the Markov chain alone, independent of the images to be placed.
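A reconstruction of the HIT idea on a hypothetical three-slot chain (this is my sketch from the talk's description, not Yahoo!'s actual code): compute each slot's expected hitting time from the scan's start state, then assign objects by decreasing score to slots of increasing hitting time.

```python
# Sketch of HIT-style placement on a toy chain (hypothetical data).
def hitting_times(P, start, n_iter=500):
    """Expected steps from `start` until first reaching each state,
    via fixed-point iteration on h(x) = 0 if x is the target, else
    1 + sum_y P[x][y] * h(y)."""
    states = list(P)
    h = {}
    for target in states:
        v = {s: 0.0 for s in states}
        for _ in range(n_iter):
            v = {s: 0.0 if s == target
                 else 1.0 + sum(p * v[y] for y, p in P[s].items())
                 for s in states}
        h[target] = v[start]
    return h

# Toy 3-slot chain: the scan tends to move 0 -> 1 -> 2.
P = {0: {1: 0.9, 0: 0.1}, 1: {2: 0.8, 0: 0.2}, 2: {2: 1.0}}
h = hitting_times(P, start=0)

scores = {"img_a": 0.3, "img_b": 0.9, "img_c": 0.6}  # invented scores
slots = sorted(P, key=lambda s: h[s])                # increasing hitting time
placement = dict(zip(slots, sorted(scores, key=scores.get, reverse=True)))
print(placement)
```

Note that the slot ordering depends only on the chain, matching the talk's point that HIT's ordering is independent of the images being placed.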

These findings show that observational studies and click mining can indeed be combined. Dr. Raghavan concluded his talk by stating that more experiments are needed, especially with non-grid layouts, and stressed the difficulty of Markov-chain estimation in that setting.

Prabhakar Raghavan is the head of Yahoo! Labs. Raghavan's research interests include text and web mining, and algorithm design.

He is a consulting professor of Computer Science at Stanford University and formerly editor-in-chief of the Journal of the ACM.

He has co-authored two textbooks, on randomized algorithms and on information retrieval.

Prior to joining Yahoo!, he was the chief technology officer at Verity and has held a number of technical and managerial positions at IBM Research.

FEATURE:

 

ICPR 2010

Plenary Talk