Homepage & Category Study Methodology

This study examines the user’s homepage and category navigation experience. It specifically looks at how users navigate, find and select products on e-commerce sites. This covers pages and design elements such as the homepage, category navigation, sub-categories, and product lists.

This Homepage & Category usability study is based on two main research components:

  1. Multiple rounds of large-scale usability testing (1:1 think-aloud user testing at 19 leading e-commerce sites) leading to 79 Homepage & Category usability guidelines described in the Homepage & Category usability report, and
  2. Benchmarking of 50 leading US e-commerce sites, using the 79 Homepage & Category usability guidelines as the benchmark heuristics and scoring parameters.

The methodology for each of these research methods is described in detail below.

To purchase access to the Homepage & Category Usability Report & Benchmark go to: baymard.com/homepage-and-category-usability

Usability Testing Methodology

One part of this research is based on a large-scale usability study of 19 major e-commerce sites. The usability study tasked real users with finding, evaluating and selecting products matching everyday purchasing tasks.

The 1:1 “think aloud” test protocol was used to test the 19 sites: Amazon, Best Buy, Blue Nile, Chemist Direct, Drugstore.com, eBags, Gilt, Go Outdoors, H&M, IKEA, Macy’s, Newegg, Pixmania, Pottery Barn, REI, Tesco, Toys’R’Us, The Entertainer/TheToyShop.com, and Zappos. Each test subject tested 4-8 sites, depending on their pace. The duration of each subject’s test session varied between 1 and 1.5 hours, and the subjects were allowed breaks between each site tested.

In order to avoid artificially forcing the subjects to use category navigation on the tested sites, this study was conducted as a combined e-commerce category navigation and search study. This way it was up to the test subjects themselves to decide if they preferred to search or navigate via the categories to find what they were looking for (i.e., they were never asked to use one approach over the other). Furthermore, it allowed the subjects to mix category navigation and search.

During the test sessions the subjects experienced 900+ usability issues specifically related to the homepage, category and main navigation, site taxonomy, category pages, and similar. The issues ranged from minor interruptions to severe misconceptions about the basic premises of how to find products at an e-commerce site, with task completion rates going as low as 10-30% when subjects were asked to find fairly common products, e.g. a sleeping bag for cold weather, a spring jacket, or a camera with a case.

All of these findings have been analyzed and distilled into 79 specific usability guidelines on homepage and category navigation.

Since there will always be site-contextual differences, the aim of this study is not to arrive at statistical conclusions of whether 31.1% or 32.6% of your users will encounter a specific issue. The aim is rather to examine the full breadth of the user’s homepage and category navigational experience, and to present the issues which are most likely to cause a poor product finding experience (and consequently a potential loss of sales). And just as importantly, to present solutions and design patterns that were verified during testing to resolve or lessen these usability issues.

For a study following the think aloud protocol, the binomial probability formula shows that, with 20 test subjects, 95% of all usability problems with an occurrence rate of 14% or higher will on average be discovered.
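
As a minimal sketch of this calculation (the standard problem-discovery formula: the probability that a problem with occurrence rate p is encountered by at least one of n subjects is 1 - (1 - p)^n):

    # Probability that a usability problem with occurrence rate p is
    # encountered by at least one of n test subjects.
    def discovery_probability(p: float, n: int) -> float:
        return 1 - (1 - p) ** n

    # With 20 subjects, a problem affecting 14% of users is discovered
    # with roughly 95% probability:
    print(f"{discovery_probability(0.14, 20):.1%}")  # -> 95.1%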

Benchmarking Methodology

The other part of this research study is a comprehensive usability benchmark. Using the 79 usability guidelines from the large-scale usability tests as the review heuristics and scoring parameters, we’ve subsequently benchmarked the homepage, main drop-down menu, top-level navigation, category pages, product lists and product pages at 50 top-grossing US e-commerce sites. This has resulted in a benchmark database with more than 3,950 navigational parameters reviewed, 1,800 additional examples for the 79 guidelines, and 240 navigational page examples from top retailers, each annotated with review notes.

The total UX performance score assigned to each benchmarked site is essentially an expression of how good (or bad) a homepage, category and navigational experience a first-time user will have at the e-commerce site – based on the 79 guidelines documented in the Homepage & Category Usability report.

The specific score is calculated using a weighted multi-parameter algorithm. Below is a brief description of the main elements in the algorithm:

  • An individual guideline weight: A combination of the Severity of violating a specific guideline (either Harmful (worst), Disruptive or Interruption, as defined in the usability report), and the Frequency of occurrence of the specific guideline (i.e. how often the test subjects experienced it during the usability study).
  • A Rating describing the degree to which a specific site adheres to each guideline (Adhered High, Adhered Low, Neutral, Issue resolved, Violated Low, Violated High, N/A).
  • The scores are summed across all guidelines, and then divided by the total number of applicable guidelines (to ensure “N/A” does not influence the score); see the sketch following this list.
  • The Highlights marked on the site screenshots are specific examples that the reviewer judged to be of interest to the reader. It’s the site’s overall adherence to or violation of a guideline that is used to calculate the site’s usability score. Thus, you may find a specific Highlight that shows an example of how a site adheres to a guideline, even though that same site is scored as violating the guideline (typically because the site violates the guideline on another page), and vice versa.
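
A minimal sketch of how such a weighted scoring algorithm can be structured is shown below. Note that the specific severity weights and rating multipliers are illustrative assumptions, not the exact values used in the benchmark:

    # Hypothetical sketch of a weighted multi-parameter scoring algorithm.
    # The severity weights and rating multipliers below are assumptions
    # for illustration; the actual values are defined in the report.
    SEVERITY_WEIGHT = {"Harmful": 3.0, "Disruptive": 2.0, "Interruption": 1.0}
    RATING_MULTIPLIER = {
        "Adhered High": 1.0, "Adhered Low": 0.5, "Neutral": 0.0,
        "Issue resolved": 1.0, "Violated Low": -0.5, "Violated High": -1.0,
    }

    def site_score(reviews):
        """reviews: a list of (severity, frequency, rating) tuples, one per
        guideline, where frequency is the 0-1 occurrence rate observed in
        the usability study and rating is one of the labels above or "N/A"."""
        total, applicable = 0.0, 0
        for severity, frequency, rating in reviews:
            if rating == "N/A":  # "N/A" must not influence the score
                continue
            weight = SEVERITY_WEIGHT[severity] * frequency  # guideline weight
            total += weight * RATING_MULTIPLIER[rating]
            applicable += 1
        # Normalize by the number of applicable guidelines
        return total / applicable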

All site reviews were conducted by Christian Holst, Jamie Appleseed and Thomas Gronne, from October 15th to November 19th 2013 (a 2017 update will be released in Q1). A US-based IP address was used. In cases where multiple local or language versions of a site existed, the US version of the site was used for the benchmark.

All reviews were conducted as a new customer would experience them - hence no existing accounts or browsing history were used. The documented and benchmarked designs at each site were: splash pages, homepage, drop-down hover-state, top-level navigation, custom category page, product list, and product page. One specific path of the navigation hierarchy from each site is shown in the benchmark, but the reviewer also investigated 15-30 other pages, which were used for the benchmark scoring as well.

Baymard Institute provides this information “as is”. It is based on the reviewers’ subjective judgment of each site at the time of testing and in relation to the documented guidelines. Baymard Institute cannot be held responsible for any kind of usage or for the correctness of the provided information.

The screenshots used may contain images and artwork that are both copyright- and trademark-protected by their respective owners. Baymard Institute does not claim ownership of the artwork that might be featured within these screenshots, and solely captures and stores the website screenshots in order to provide constructive review and feedback within the topic of web design and web usability.

Citations, images, and paraphrasing may only be published elsewhere to a limited extent, and only if crediting “Homepage & Category Usability study by Baymard Institute, baymard.com/research/homepage-and-category-usability”.
