Product Lists & Filtering Study Methodology

This study examines the user’s product finding process and experience. It specifically looks at how sites can improve the user’s ability to find, evaluate, and select just a handful of products relevant to their needs and interests from the hundreds of products available on most e-commerce sites. This includes page areas such as the overall product list layout, the specific list item design, and the filtering, sorting, and comparison features, for both category- and search-based product lists (i.e., search results pages).

This Product Lists & Filtering usability study is based on two main research components:

  1. Multiple rounds of large-scale usability testing (1:1 think-aloud user testing at 19 leading e-commerce sites) leading to 93 Product Lists & Filtering usability guidelines described in the Product Lists & Filtering usability report, and
  2. Benchmarking of 50 leading US e-commerce sites, using the 93 Product Lists & Filtering usability guidelines as the benchmark heuristics and scoring parameters.

Below, the methodology for each of these research methods is described in detail.

To purchase access to the Product Lists & Filtering Usability Report & Benchmark go to: baymard.com/ecommerce-product-lists

Usability Testing Methodology

One part of this research is based on a large-scale usability study of 19 major e-commerce sites. The usability study tasked real users with finding, evaluating, and selecting products for everyday purchasing tasks, such as finding a case for their current camera, an outfit for a party, or an interesting movie.

The 1:1 “think aloud” test protocol was used to test the 19 sites: Amazon, Best Buy, Blue Nile, Chemist Direct, Drugstore.com, eBags, Gilt, Go Outdoors, H&M, IKEA, Macy’s, Newegg, Pixmania, Pottery Barn, REI, Tesco, Toys’R’Us, The Entertainer/TheToyShop.com, and Zappos. Each test subject tested 4-8 sites, depending on their pace. The duration of each subject’s test session varied between 1 and 1.5 hours, and the subjects were allowed breaks between each site tested.

During the test sessions the subjects experienced 700+ usability issues specifically related to the product list design and tools. Crucially, significant performance gaps were identified between the tested sites’ product list implementations. Sites with merely mediocre product list implementations saw massive abandonment rates of 67-90%, whereas sites with better implementations saw abandonment rates of only 17-33%, despite the test subjects trying to find the exact same products. All of the sites carried products equally relevant to what the test subjects were looking for; the difference stemmed solely from the design and features of each site’s product list.

All of these findings have been analyzed and distilled into 93 specific usability guidelines on how to achieve these significant performance gains through better design and implementation of the list layout and list item design, as well as the filtering, sorting, and comparison tools. The 93 design guidelines identify and describe major roadblocks in the user’s product finding, evaluation, and selection experience.

Since there will always be contextual site differences, the aim of this study is not to arrive at statistical conclusions about whether 16.1% or 17.2% of your users will encounter a specific issue. The aim is rather to examine the full breadth of the user’s product list experience and present the issues that are most likely to cause a poor product finding experience (and consequently a potential loss of sales). Just as importantly, the aim is to present solutions and design patterns that were verified during testing to resolve or lessen these usability issues.

For a study following the think-aloud protocol, the binomial probability formula shows that, with 20 test subjects, 95% of all usability problems with an occurrence rate of 14% or higher will be discovered on average.
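
The underlying calculation is the standard problem-discovery model, assuming each subject independently encounters a given problem with probability p:

P(discovered) = 1 − (1 − p)^n

With p = 0.14 and n = 20: 1 − (1 − 0.14)^20 = 1 − 0.86^20 ≈ 1 − 0.049 ≈ 0.95, i.e., a 95% chance of discovery.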

This Product Lists & Filtering study also draws on select findings from our previous studies on Mobile E-commerce Usability, Homepage & Category Navigation, and E-commerce Search Usability, along with a large-scale eye-tracking study with 33 participants.

Benchmarking Methodology

The other part of this research study is a comprehensive usability benchmark. Using the 93 usability guidelines from the large-scale usability tests as the review heuristics and scoring parameters, we’ve subsequently benchmarked the product list layout, list item design, and filtering, sorting, and comparison tools at 50 top-grossing US e-commerce sites. This has resulted in a benchmark database with more than 4,500 product list parameters reviewed, 2,200 additional examples for the 93 guidelines, and 200 product list page examples from top retailers, each annotated with review notes.

The total UX performance score assigned to each benchmarked site is essentially an expression of how good (or bad) a product list, filtering, and sorting experience a first-time user will have at the e-commerce site, based on the 93 e-commerce product list usability guidelines documented in the Product Lists & Filtering Usability report.

The specific score is calculated using a weighted multi-parameter algorithm. Below is a brief description of the main elements in the algorithm (an illustrative sketch follows the list):

  • An individual guideline weight: A combination of the Severity of violating a specific guideline (either Harmful (worst), Disruptive, or Interruption, as defined in the usability report) and the Frequency of occurrence of issues related to the specific guideline (i.e., how often the test subjects experienced them during the usability study).
  • A Rating describing the degree to which a specific site adheres to each guideline (Adhered High, Adhered Low, Neutral, Issue resolved, Violated Low, Violated High, N/A).
  • The scores are summed across all guidelines and then divided by the total number of applicable guidelines (so that “N/A” ratings do not influence the score).
  • The Highlights marked on the site screenshots are specific examples that the reviewer judged to be of interest to the reader. It is the site’s overall adherence to or violation of a guideline that is used to calculate the site’s usability score. Thus, you may find a specific Highlight that shows an example of how a site adheres to a guideline even though that same site is scored as violating the guideline (typically because the site violates the guideline on another page), and vice versa.
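
To illustrate the structure of such a scoring algorithm, here is a minimal sketch in Python. The specific weight values, rating multipliers, and helper names (such as ux_score) are hypothetical placeholders, since the exact numbers are not published in this methodology; only the overall structure (severity × frequency weighting, rating multipliers, and division by the number of applicable guidelines) follows the description above.

```python
# Hypothetical sketch of a weighted multi-parameter scoring algorithm.
# The weights and multipliers below are illustrative placeholders;
# Baymard's actual values are not published in this methodology.

# Severity weights (Harmful is worst), to be combined with frequency
SEVERITY_WEIGHT = {"Harmful": 3.0, "Disruptive": 2.0, "Interruption": 1.0}

# How strongly each rating counts for or against the site
RATING_MULTIPLIER = {
    "Adhered High": 1.0,
    "Adhered Low": 0.5,
    "Neutral": 0.0,
    "Issue resolved": 0.5,
    "Violated Low": -0.5,
    "Violated High": -1.0,
}

def ux_score(reviews):
    """Compute a site's UX score from per-guideline reviews.

    `reviews` is a list of dicts with keys:
      severity  - "Harmful", "Disruptive", or "Interruption"
      frequency - how often subjects hit the issue in testing (0.0-1.0)
      rating    - one of the ratings above, or "N/A"
    """
    total, applicable = 0.0, 0
    for r in reviews:
        if r["rating"] == "N/A":
            continue  # "N/A" must not influence the score
        weight = SEVERITY_WEIGHT[r["severity"]] * r["frequency"]
        total += weight * RATING_MULTIPLIER[r["rating"]]
        applicable += 1
    # Divide by the number of applicable guidelines only
    return total / applicable if applicable else 0.0

print(ux_score([
    {"severity": "Harmful", "frequency": 0.6, "rating": "Violated High"},
    {"severity": "Interruption", "frequency": 0.2, "rating": "Adhered High"},
    {"severity": "Disruptive", "frequency": 0.4, "rating": "N/A"},
]))
```

Dividing by the number of applicable guidelines, rather than all 93, ensures a site is neither penalized nor rewarded for guidelines that do not apply to it.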

All site reviews were conducted by Christian Holst, Jamie Appleseed, and Thomas Grønne from December 15, 2014 to February 19, 2015. A US-based IP address was used. In cases where multiple local or language versions of a site existed, the US version was used for the benchmark.

All reviews were conducted as a new customer would experience the site; hence, no existing accounts or browsing history were used. The documented and benchmarked designs at each site were: the product lists on both category pages and search results, filtering tools, sorting tools, product comparison tools, and the product page. One specific page from a site is shown in the benchmark, but the reviewer investigated 15-30 other pages, which were also used for the benchmark scoring.

Baymard Institute provides this information “as is”. It is based on the reviewers’ subjective judgment of each site at the time of testing and in relation to the documented guidelines. Baymard Institute cannot be held responsible for the correctness of the provided information or for any use made of it.

The screenshots used may contain images and artwork that are copyright- and trademark-protected by their respective owners. Baymard Institute does not claim ownership of the artwork that might be featured within these screenshots, and solely captures and stores the website screenshots in order to provide constructive review and feedback on the topic of web design and web usability.

Citations, images, and paraphrasing may only be published elsewhere to a limited extent, and only if crediting “Product Lists & Filtering Usability study by Baymard Institute, baymard.com/research/ecommerce-product-lists”.
