This study examines the user’s homepage and category navigation experience. It specifically looks at how users navigate, find, and select products on e-commerce sites. This covers pages and design elements such as the homepage, category navigation, sub-categories, and product lists.
This Homepage & Category usability study is based on two main research components:
The methodology for each of these research components is described in detail below.
To purchase access to the Homepage & Category Usability Report & Benchmark go to: baymard.com/homepage-and-category-usability
One part of this research is based on a large-scale usability study of 19 major e-commerce sites. The usability study tasked real users with finding, evaluating and selecting products matching everyday purchasing tasks.
The 1:1 “think aloud” test protocol was used to test the 19 sites: Amazon, Best Buy, Blue Nile, Chemist Direct, Drugstore.com, eBags, Gilt, Go Outdoors, H&M, IKEA, Macy’s, Newegg, Pixmania, Pottery Barn, REI, Tesco, Toys’R’Us, The Entertainer/TheToyShop.com, and Zappos. Each test subject tested 4-8 sites, depending on their pace. Each subject’s test session lasted between 1 and 1.5 hours, and the subjects were allowed breaks between each site tested.
In order to avoid artificially forcing the subjects to use category navigation on the tested sites, this study was conducted as a combined e-commerce category navigation and search study. This way it was up to the test subjects themselves to decide if they preferred to search or navigate via the categories to find what they were looking for (i.e., they were never asked to use one approach over the other). Furthermore, it allowed the subjects to mix category navigation and search.
During the test sessions the subjects experienced 900+ usability issues specifically related to the homepage, category and main navigation, site taxonomy, category pages, and similar. The issues ranged from minor interruptions to severe misconceptions about the basic premises of how to find products at an e-commerce site, with task completion rates dropping as low as 10-30% when subjects were asked to find fairly common products, e.g. a sleeping bag for cold weather, a spring jacket, or a camera with a case.
All of these findings have been analyzed and distilled into 79 specific usability guidelines on homepage and category navigation.
Since there will always be site-contextual differences, the aim of this study is not to arrive at statistical conclusions about whether 31.1% or 32.6% of your users will encounter a specific issue. The aim is rather to examine the full breadth of the user’s homepage and category navigation experience, and to present the issues which are most likely to cause a poor product-finding experience (and consequently a potential loss of sales). Equally important, the aim is to present solutions and design patterns that were verified during testing to resolve or lessen these usability issues.
For a study following the think aloud protocol, the binomial probability formula shows that, on average, 95% of all usability problems with an occurrence rate of 14% or higher will be discovered when 20 test subjects are used.
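The underlying arithmetic is straightforward: if a usability problem affects a fraction p of users, the probability that at least one of n test subjects encounters it is 1 − (1 − p)^n. A minimal sketch (the function name is ours, for illustration):

```python
# Probability that a usability problem with per-session occurrence rate p
# is encountered at least once across n independent test sessions.
def discovery_probability(p: float, n: int) -> float:
    return 1 - (1 - p) ** n

# With 20 subjects, a problem affecting 14% of users is discovered
# with probability 1 - 0.86**20, which is roughly 95%.
print(round(discovery_probability(0.14, 20), 3))  # → 0.951
```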
The other part of this research study is a comprehensive usability benchmark. Using the 79 usability guidelines from the large-scale usability tests as the review heuristics and scoring parameters, we’ve subsequently benchmarked the homepage, main drop-down menu, top-level navigation, category pages, product lists, and product pages at 50 top-grossing US e-commerce sites. This has resulted in a benchmark database with more than 3,950 navigational parameters reviewed, 1,800 additional examples for the 79 guidelines, and 240 navigational page examples from top retailers, each annotated with review notes.
The total UX performance score assigned to each benchmarked site is essentially an expression of how good (or bad) a homepage, category, and navigational experience a first-time user will have at the e-commerce site, based on the 79 guidelines documented in the Homepage & Category Usability report.
The specific score is calculated using a weighted multi-parameter algorithm:
Below is a brief description of the main elements in the algorithm:
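To illustrate the general shape of such a weighted multi-parameter score, the sketch below computes a weighted average of per-guideline compliance ratings. The guideline names, rating scale, and weights are purely hypothetical examples and are not Baymard’s actual algorithm or parameters:

```python
# Hypothetical sketch: a weighted average of per-guideline ratings.
# Guideline names, the -1..+1 rating scale, and the weights are all
# illustrative assumptions, not the report's actual scoring parameters.
def ux_score(ratings: dict, weights: dict) -> float:
    """Return the weight-normalized average rating across all reviewed guidelines."""
    total_weight = sum(weights[g] for g in ratings)
    return sum(ratings[g] * weights[g] for g in ratings) / total_weight

ratings = {"visible-categories": 1.0, "clear-taxonomy": -0.5, "hover-menu": 0.5}
weights = {"visible-categories": 3.0, "clear-taxonomy": 2.0, "hover-menu": 1.0}
print(round(ux_score(ratings, weights), 3))  # → 0.417
```

Weighting matters because a severe violation of a high-impact guideline should pull a site’s score down more than several cosmetic issues combined.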
All site reviews were conducted by Christian Holst, Jamie Appleseed and Thomas Gronne, from October 15th to November 19th 2013 (a 2017 update will be released in Q1). A US-based IP address was used. In cases where multiple local or language versions of a site existed, the US site version was used for the benchmark.
All reviews were conducted as a new customer would experience the site; no existing accounts or browsing history were used. The documented and benchmarked designs at each site were: splash pages, homepage, drop-down hover state, top-level navigation, custom category page, product list, and product page. One specific path through each site’s navigation hierarchy is shown in the benchmark, but the reviewers also investigated 15-30 other pages at each site, which were used for the benchmark scoring as well.
Baymard Institute provides this information “as is”. It is based on the reviewers’ subjective judgment of each site at the time of testing and in relation to the documented guidelines. Baymard Institute cannot be held responsible for any use or for the correctness of the provided information.
The screenshots used may contain images and artwork that are copyright and trademark protected by their respective owners. Baymard Institute does not claim ownership of the artwork that might be featured within these screenshots, and solely captures and stores the website screenshots in order to provide constructive review and feedback on web design and web usability.
Citations, images, and paraphrasing may only be published elsewhere to a limited extent, and only when crediting “Homepage & Category Usability study by Baymard Institute, baymard.com/research/homepage-and-category-usability”.