Checkout Usability Study Methodology

This study examines the user’s checkout experience. It specifically looks at how sites can improve the shopping cart and checkout flow to reduce the number of users who abandon their purchase. The study covers areas such as the shopping cart, users’ privacy concerns, form field usability, the flow and layout of checkout pages, third-party payment options, and validation errors.

This Checkout usability study is based on two main research components:

  1. Multiple rounds of large-scale checkout usability testing (1:1 think-aloud user testing at 15 leading e-commerce sites) leading to 63 usability guidelines described in the Checkout usability report, and
  2. Benchmarking of 100 top-grossing US e-commerce sites, using the 63 Checkout usability guidelines as the benchmark heuristics and scoring parameters.

The methodology for each of these research components is described in detail below.

To purchase access to the Checkout Usability Report & Benchmark go to: baymard.com/checkout-usability

Usability Testing Methodology

One part of this research is based on a large-scale usability study of 15 major e-commerce sites. The usability study tasked real users with completing purchases of multiple different types of products – all the way from “shopping cart” to “completed sale”.

The 1:1 “think aloud” test protocol was used to test the 15 sites: 1-800-Flowers, AllPosters, American Apparel, Amnesty Shop, Apple, HobbyTron, Levi’s, NewEgg, Nordstrom, Oakley, Perfume.com, PetSmart, Thomann, Walmart, and Zappos. Each test subject tested 6-11 sites, depending on how fast they were. The duration of each subject’s test session varied between 1 and 2 hours, and the subjects were allowed breaks between each site tested.

During the test sessions the subjects experienced 500+ usability issues specifically related to the checkout flow and design. All of these findings have been analyzed and distilled into 63 specific usability guidelines on how to design and structure a high-performing checkout flow.

Since there will always be contextual site differences, the aim of this study is not to arrive at statistical conclusions such as whether 61% or 62.3% of your users will encounter a specific issue. The aim is rather to examine the full breadth of the user’s checkout behavior and to present the issues most likely to cause checkout abandonments. Just as importantly, the study presents the solutions and checkout design patterns that were verified during testing to produce a high-performing checkout flow.

Benchmarking Methodology

The other part of this research study is a comprehensive usability benchmark. Using the 63 checkout usability guidelines from the large-scale usability tests as the review heuristics and scoring parameters, we’ve subsequently benchmarked the checkout flow at 100 top-grossing US e-commerce sites. This has resulted in a benchmark database with more than 6,300 checkout usability parameters reviewed (63 guidelines across 100 sites), 3,000 additional examples for the 63 guidelines, and 500 checkout page examples from top retailers, each annotated with review notes.

The total UX performance score assigned to each benchmarked site is essentially an expression of how good (or bad) a checkout user experience a first-time user will have at the e-commerce site – based on the 63 guidelines documented in the Checkout Usability report.

The specific score is calculated using a weighted multi-parameter algorithm. Below is a brief description of the main elements in the algorithm (a code sketch follows the list):

  • An individual guideline weight: a combination of the Severity of violating a specific guideline (Harmful (worst), Disruptive, or Interruption, as defined in the usability report) and the Frequency of occurrence of the specific guideline (i.e., how often the test subjects experienced it during the usability study).
  • A Rating describing the degree to which a specific site adheres to each guideline (Adhered High, Adhered Low, Neutral, Issue resolved, Violated Low, Violated High, N/A).
  • The scores are summed across all guidelines and then divided by the total number of applicable guidelines (to ensure “N/A” ratings do not influence the score).
  • The Highlights marked on the site screenshots are specific examples that the reviewer judged to be of interest to the reader. It is the site’s overall adherence to or violation of a guideline that is used to calculate the site’s usability score. Thus, you may find a specific Highlight showing how a site adheres to a guideline even though that same site is scored as violating the guideline (typically because the site violates it on another page), and vice versa.
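
To make the scoring mechanics concrete, below is a minimal sketch in Python of how such a weighted multi-parameter score could be computed. The severity weights, rating multipliers, the site_score function, and the example data are all illustrative assumptions; the report does not publish the actual numeric values used.

    # Minimal sketch of the weighted scoring scheme described above.
    # All numeric weights are hypothetical placeholders; the actual
    # values used in the benchmark are not published in this document.

    # Assumed severity weights (Harmful is the worst severity).
    SEVERITY_WEIGHT = {"Harmful": 3.0, "Disruptive": 2.0, "Interruption": 1.0}

    # Assumed rating multipliers: positive for adherence, negative for
    # violations, zero for neutral. "N/A" is deliberately absent, since
    # non-applicable guidelines are excluded from the score entirely.
    RATING_MULTIPLIER = {
        "Adhered High": 1.0,
        "Adhered Low": 0.5,
        "Issue resolved": 0.5,
        "Neutral": 0.0,
        "Violated Low": -0.5,
        "Violated High": -1.0,
    }

    def site_score(reviews):
        """Compute a site's overall UX score.

        `reviews` is a list of (severity, frequency, rating) tuples, one
        per guideline, where `frequency` is the share of test subjects
        (0.0-1.0) who experienced the issue during the usability study,
        and `rating` is one of the labels above, or "N/A".
        """
        total, applicable = 0.0, 0
        for severity, frequency, rating in reviews:
            if rating == "N/A":
                continue  # "N/A" must not influence the score
            weight = SEVERITY_WEIGHT[severity] * frequency
            total += weight * RATING_MULTIPLIER[rating]
            applicable += 1
        # Average over applicable guidelines only.
        return total / applicable if applicable else 0.0

    # Example with two invented guideline reviews:
    example = [
        ("Harmful", 0.8, "Adhered High"),    # e.g. guest checkout offered
        ("Disruptive", 0.5, "Violated Low"),
    ]
    print(round(site_score(example), 3))  # 0.95 with these assumed weights

Dividing by the number of applicable guidelines rather than the full 63 is what keeps “N/A” ratings from influencing the final score.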

All site reviews were conducted by Christian Holst and Jamie Appleseed from March 5 to April 8, 2012 (an updated version will be published in Q3 2016). A US-based IP address was used. In cases where multiple local or language versions of a site existed, the US version was used for the benchmark.

All reviews were conducted as a new customer would experience them; hence, no existing accounts or browsing history were used. The shortest path through the checkout was always the one benchmarked (e.g., the “guest checkout” option). The documented and benchmarked designs at each site were the shopping cart, account selection, shipping address, shipping methods, billing address, payment methods, order review, and order confirmation steps.

Baymard Institute provides this information “as is”. It is based on the reviewers’ subjective judgment of each site at the time of testing and in relation to the documented guidelines. Baymard Institute cannot be held responsible for any use or for the correctness of the provided information.

The screenshots used may contain images and artwork that are protected by the copyrights and trademarks of their respective owners. Baymard Institute does not claim ownership of the artwork that might be featured within these screenshots, and solely captures and stores the website screenshots in order to provide constructive review and feedback on web design and web usability.

Citations, images, and paraphrasing may only be published elsewhere to a limited extent, and only when crediting “Checkout Usability study by Baymard Institute, baymard.com/research/checkout-usability”.
