What’s new in Baymard Premium and what’s coming
Baymard has conducted 61,000+ hours of large-scale e-commerce UX research. Every year we add another 16,500+ hours of new UX research to Baymard Premium. All of these research findings are available only in the Premium research catalog.
This ‘Roadmap & Changelog’-page describes what’s new in Premium and what new research is coming.
Tip: If you want to get four emails per year summarizing what new research Baymard has just published, sign up for our Quarterly Newsletter.
Below are some of the most prominent research releases and features we’ve added to Baymard Premium:
All Benchmark Scorecards Show Year-Over-Year UX Performances
All benchmark UX performance scorecards now show historic Year-Over-Year performance comparisons, comparing the site’s current year (the bar chart) with the site’s past UX performances from our past benchmarks. In practice, this means that the benchmark case study pages will now, by default, show you 42,000+ past performance ratings on top of the current year’s performance ratings.
For those interested in the historic overall UX performance landscape within e-commerce, we’ve also made a page with historic versions of the benchmark scatterplot, from 2020 dating back to our first checkout UX benchmark performed in 2012.
All New Benchmarks Will Show ‘Implementation Breakdown’ and ‘UX Advice’
All new benchmarks will now feature a much more detailed view of the actual site implementation rated. This used to be information we only captured for internal documentation at Baymard, but we’ve now upgraded our benchmarking systems and process to show this to you. The new implementation overlay (seen above) will show you the actual assessment selections made for each of the 70,000+ UX performance ratings in the Benchmark tool, for the 40,000+ annotation pins in the Page Design tool, and for the 40,000+ Best- and Worst Practice examples found across the Guidelines.
For all new UX benchmarks we publish going forward, this new implementation overlay will provide you with:
The bottom of the implementation overlay will also link to other relevant places in Premium. Notably, it will cross-link all Mobile and Desktop implementation examples with each other and cross-link to all past examples of that guideline from the currently viewed site.
The just-released ‘Product Lists & Filtering’ and ‘Homepage & Category’ benchmarks already have this extra dataset available.
New 2020 Product Listings & Filtering UX benchmark with 1,500 Examples
We’re releasing a 2020 update for our Product Listings & Filtering UX benchmark, with 25 large e-commerce sites rated across our 95 Product Listings UX guidelines.
This provides you with:
New 2020 Homepage & Category UX benchmark with 1,300 Examples
We’re releasing a 2020 update for our Homepage & Category UX benchmark, with 25 large e-commerce sites rated across our 60 Homepage & Category UX guidelines.
This provides you with:
UX assessment tests embedded at the end of all 700+ guidelines
We’ve just released a new feature that embeds the Review Tool’s assessment interface at the end of all 700+ UX research guidelines:
This means that each UX guideline now ends with a simple interactive UX quiz you can take to see to what degree your website is suffering from the observed UX issues. The tool also provides tailored advice on how to improve your site UX based on the more than 7,000 UI scenario options found in the Review Tool.
Note that this embedded test is just a “sandbox” – so your selections do not get saved. If you want to perform a full UX assessment (across 1 or all 700 guidelines) and get a UX performance scorecard, use the Review Tool instead.
New 2020 Mobile UX benchmark with 6,300 Examples
We’re releasing another round of Mobile E-Commerce UX benchmarking in 2020 (publishing one in February 2020 and another in June 2020), with 55 large e-commerce sites rated across up to 220 of our mobile UX guidelines (from the new mobile v2 research study).
This provides you with 6,300+ new worst- and best-practice Mobile UX examples (3,300 more than in the February release), and 8,700+ new Mobile UX performance scores (5,100 more than in the February release).
You can view the Mobile E-Commerce UX performance and case-study dataset in the Benchmark tool, and you can see the 800+ new mobile design examples from leading e-commerce sites in the Page Design tool.
The 6,300+ worst- and best-practice examples are already embedded in the 200+ mobile UX guidelines.
New ‘Industry Views’ for grocery-, digital-, configurator-, and ‘physical store’-sites
We’ve expanded the existing “Guidelines by Industry” feature with 4 new collections:
If you work on an e-commerce site within any of these industries, we recommend that you start examining the 50-150 guidelines listed within the corresponding industry view page before diving into the full 700+ guidelines available in Baymard Premium.
We’ve created these industry lists by analyzing the UX performance ratings from our past UX audits and our benchmark dataset, looking specifically for which of the 700+ guidelines — within each industry — have the largest collective negative UX performance impact. In other words, guidelines that most sites within the industry violate despite the guideline having a relatively high UX impact during usability testing.
New ‘Review Tool’ with 7,000+ Scenarios Based on 9 Years of Auditing Experience
Most of Baymard’s developer and design teams have spent close to one full year building a completely new version of the Review Tool available in Baymard Premium.
The Review Tool allows you to self-assess your own site(s), competitor sites, prototypes, and client websites, against any of the 700+ guidelines available in Baymard Premium. This enables you to:
This new Review Tool builds upon our audit team’s nine years of experience conducting highly detailed UX audits, and it has more than 7,000 site implementation scenarios available (or “UI patterns”, if you will), each traced back to specific findings among Baymard’s 54,000+ hours of UX research to determine the UX performance output. This enables you to create highly accurate and highly repeatable heuristic evaluations at a level that is otherwise unseen in the UX industry.
Try the Review Tool, or start by watching the below video intro:
‘Guidelines by Type’: see all guidelines that are ‘Quick Wins’, ‘UX Basics’, etc.
You now have a new way of navigating the 700+ research guidelines called “Guidelines by Type”.
This feature shows you different collections of guidelines that all have the same attribute.
The “Guidelines by Type” collections available are: Quick Wins, Missed Opportunity, Web Convention, ‘COVID-19’ Related, UX Basics. (More will be added later.)
The new guideline collections are useful if you are interested in a particular type of guideline, like “Low Cost/Quick Wins” or “UX Basics”.
New feature to see ‘Your Research Updates’
We’ve improved the “Mark as Read” feature in two ways:
Tip: we strongly recommend that you use the “Mark as Read” button on all research content you’ve viewed – it’s the only way to keep track of your progress, to avoid duplicate reading, and to get notified about updates to the research you’ve read.
COVID-19 Relevant Guidelines
We’ve assembled a list of the 58 existing Baymard UX guidelines that are most relevant to consider in these COVID-19 times, covering user concerns such as product availability, shipping, etc. - see it here
While Baymard hasn’t conducted testing specifically during the COVID-19 crisis (large-scale usability research simply takes too long to respond), our existing research does address the informational challenges users are facing right now. The list covers Baymard’s guidelines that relate to users’ general concerns over product availability, shipping speed, order returns, customer support, and similar.
New 2020 Mobile UX benchmark with 2,700 Examples
We’re releasing our new Mobile E-Commerce UX benchmark dataset. This provides you with 30 large e-commerce sites rated across the 120 new mobile UX guidelines (from the new mobile v2 research study), leading to 2,700+ new worst- and best-practice UX examples, and 3,600+ new UX performance scores.
You can view the Mobile E-Commerce UX performance and case-study dataset in the Benchmark tool and you can see the 400+ new mobile design examples from leading e-commerce sites in the Page Design tool.
The 2,700+ worst- and best-practice examples are already embedded in the 120 mobile UX guidelines.
New Mobile E-Commerce UX Study
We’re releasing our new large-scale Mobile E-Commerce UX study — the most comprehensive research study we’ve ever conducted. The mobile users encountered a massive 2,597 usability issues during testing, which we’ve distilled into 170+ new mobile e-commerce UX guidelines. These guidelines will reveal new mobile user behavior specific to e-commerce, where and how mobile websites need to deviate from desktop, which mobile design patterns consistently cause issues and which perform well, and how to create a highly optimized mobile shopping experience. The new mobile guidelines also reveal which of the current approximately 500 “desktop” guidelines found in Baymard Premium are actually “platform agnostic” and apply equally to both desktop and mobile sites.
Go see the theme overview page for the new extensive mobile UX study.
Note: The old mobile v1 guidelines have been removed from the guideline navigation, but deep links to specific guidelines still work. For newly created reviews, the Review Tool will use the new mobile v2 guidelines; any reviews created before Feb 27, 2020 will use the mobile v1 guidelines.
New 2020 Quantitative Research on 14 Different Topics
New quantitative 2020 research, providing you with the newest stats on user preferences and behavior across 14 topics, including: why users abandon during the checkout flow (see Checkout intro), why users shop online instead of in physical stores (see Product Page intro), mobile app install rates, utilization rates, and top 5 installed apps (see #930), which information types will make users abandon the checkout (#731), which SSL and trust badges give users the best sense of trust in a checkout flow (#587), how much more users rely on customer review averages when they have a higher number of votes (#517), how users actually “save” products they aren’t yet ready to purchase (#798), the frequency of order returns (here), why users unsubscribe from newsletters (here), how often users get new credit cards (#919), and more.
New UX performance graphs and improved Review Tool
After extensive development work, we’re excited to release a set of completely reworked UX performance graphs. The graphs feature improved UX scoring logic, improved visuals, and new highly advanced prediction logic for unrated guidelines. The latter greatly improves how you can use the Review Tool right now, and it also brings significant speed improvements to Baymard’s UX benchmarking going forward.
New prediction logic: the new UX performance graphs now include advanced prediction logic for all unrated guidelines. This prediction logic uses the more than 57,000 UX performance ratings in Baymard’s benchmark database to predict the score for unrated guidelines with 95% accuracy. This means the scorecards you create yourself in the Review Tool no longer require almost all guidelines to be rated before you get a UX scorecard - you can now rate just a small handful of guidelines to get a usable UX performance scorecard.
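To make the prediction idea concrete, here is a toy sketch in the style of nearest-neighbour collaborative filtering: sites in the benchmark whose rated guidelines resemble yours contribute more to the predicted scores. This is purely illustrative — Baymard’s actual prediction model is not published, so the similarity measure and data shapes below are assumptions.

```python
def predict_unrated(my_ratings, benchmark):
    """Toy illustration of predicting scores for unrated guidelines.

    my_ratings: {guideline_id: score} for the handful you rated.
    benchmark: {site: {guideline_id: score}} benchmark dataset.
    Returns predicted scores for every guideline you did not rate.
    NOT Baymard's actual model -- a sketch of the general idea only.
    """
    # Weight each benchmark site by similarity on shared guidelines
    weights = {}
    for site, ratings in benchmark.items():
        shared = [g for g in my_ratings if g in ratings]
        if not shared:
            continue
        # Inverse mean absolute difference as a crude similarity measure
        diff = sum(abs(my_ratings[g] - ratings[g]) for g in shared) / len(shared)
        weights[site] = 1.0 / (1.0 + diff)

    # Predict each unrated guideline as a similarity-weighted average
    predictions = {}
    all_guidelines = {g for r in benchmark.values() for g in r}
    for g in all_guidelines - set(my_ratings):
        num = den = 0.0
        for site, w in weights.items():
            if g in benchmark[site]:
                num += w * benchmark[site][g]
                den += w
        if den:
            predictions[g] = num / den
    return predictions
```

In this sketch, rating even a small handful of guidelines is enough to weight the benchmark sites and fill in the rest of the scorecard, which mirrors the behavior described above.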
New UX scoring logic: the scoring logic itself is greatly improved, with better handling of guidelines that are not applicable to a site and with greatly improved “benchmark normalization”, where all performance scores are adjusted on an ongoing basis depending on how the e-commerce industry as a whole evolves and on how users’ expectations increase over time.
Improved Visuals: the graphs now include a sticky header and footer, there are now gridlines showing the UX performance levels (“poor”, “mediocre”, etc.), and we’ve switched from a “scatter plot” to a “swarm plot” (meaning dots for sites with the same performance will no longer collide).
New custom graph views for UX Audits: lastly, the UX performance graphs now support custom view structures. This greatly improves Baymard’s UX audit performance scorecards, where the audit scorecards can now be even more tailored to the specific client industry.
Normalized numbers: The UX performance score numbers are now also normalized on a 0-100 scale. This makes the performance numbers easier to interpret and makes them fully comparable across Themes, Topics, and scorecards. (Note the score range has two special cases: “-100 to 0” scores for “Broken” UX performances, and “101 to 200” for “State of The Art” performances.)
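The normalized score ranges can be sketched as a simple classification. The “Broken” (-100 to 0) and “State of the Art” (101 to 200) special cases come from the description above; the cut-offs inside the 0-100 range are illustrative assumptions, not Baymard’s actual thresholds.

```python
def performance_band(score: float) -> str:
    """Classify a normalized UX performance score into its band.

    The -100..0 "Broken" and 101..200 "State of the Art" ranges are
    taken from the scorecard description; the interior cut-offs are
    hypothetical, for illustration only.
    """
    if score < 0:
        return "Broken"
    if score > 100:
        return "State of the Art"
    # Assumed interior cut-offs (not Baymard's real values)
    if score < 25:
        return "Poor"
    if score < 50:
        return "Mediocre"
    if score < 75:
        return "Decent"
    return "Good"

print(performance_band(-20))   # Broken
print(performance_band(150))   # State of the Art
```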
New Product Page UX benchmark with 3,700 Examples
We’re releasing our new Product Page UX benchmark dataset. This provides you with 59 large e-commerce sites rated across the 95 Product Page UX guidelines, leading to 3,700+ new worst- and best-practice UX examples, and 5,600+ new UX performance scores.
You can view the Product Page UX performance and case-study dataset in the Benchmark tool, and you can see the 265 new product page design examples from leading e-commerce sites in the Page Design tool.
The 3,700+ worst- and best-practice examples are already embedded in the 95 product page UX guidelines. We recommend you start with the ‘Current State of Product Page UX’ analysis.
New Search UX benchmark
We’re releasing our new E-Commerce Search UX benchmark dataset. This provides you with 59 large e-commerce sites rated across the 50 Search UX guidelines, leading to 1,667 worst- and best-practice Search UX examples, and 2,800+ new Search UX performance scores.
You can view the Search UX performance and case-study dataset in the Benchmark tool, and you can see the 237 new search design examples from leading e-commerce sites in the Page Design tool.
The 1,600+ worst- and best-practice examples are already embedded in the 50 search UX guidelines. We recommend you start with the ‘Current State of E-Commerce Search UX’ analysis.
New Late-2019 Quantitative Research on 9 Different Topics
Nine rounds of new quantitative research, providing you with new late-2019 stats on user preferences and behavior, including: why users abandon their order during the checkout flow (see Checkout intro), why users shop online instead of in physical stores (see Product Page intro), mobile app install rates, utilization rates, and top 5 installed apps (see #930), which information types will make users abandon the checkout (#731), which SSL and trust badges give users the best sense of trust in a checkout flow (#587), how much more users rely on customer review averages when they have a higher number of votes (#517), and how users actually “save” products they aren’t yet ready to purchase (#798).
New Mobile Study (In Progress): 141 Mobile Guidelines Published So Far
The research team is currently conducting a new large-scale Mobile E-Commerce UX study — the most comprehensive research study we’ve ever conducted. The mobile users encountered a massive 2,597 usability issues during testing, revealing new mobile user behavior specific to e-commerce, where and how mobile websites need to deviate from desktop, and which of the current approximately 500 “desktop” guidelines found in Baymard Premium are actually “platform agnostic” and apply equally to both desktop and mobile sites.
Timeline:
‘Industry’ Feature: See the 50–150 Most Important Guidelines to Your Industry
You now have a new way of navigating the 700+ research guidelines called “Guidelines by Industry”.
This feature shows you the 50-150 most important guidelines to read first for a single industry (e.g. for apparel sites, mass merchant sites, etc.) and can be found next to the other research Themes in the Guideline tool.
The industry views available are: Apparel, Mass Merchant, Electronics & Office, Home & Hardware, Houseware & Furnishings, Health & Beauty, Sports, Toys, & Hobbies, and Business-to-Business. (More will be added later.)
If you work on an e-commerce site within any of these industries, we recommend that you start examining the 50-150 guidelines listed within the corresponding industry view page before diving into the full 700+ guidelines available in Baymard Premium.
We’ve created these industry lists by analyzing the 57,000+ UX performance ratings from our UX benchmark dataset, looking specifically for which of the 700+ guidelines — within each industry — have the largest collective negative UX performance impact. In other words, guidelines that most sites within the industry violate despite the guideline having a relatively high UX impact during usability testing.
The industry view is meant as a starting point. Any given site within these industries should still pay attention to all the 700+ UX guidelines in Baymard Premium. To make that task more manageable, those 50-150 guidelines found within the “Guidelines by Industry” list will for most sites (within those respective industries) be the best starting point for improving their overall UX.
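The selection logic described above — ranking guidelines by how many sites in an industry violate them, weighted by the UX impact observed in testing — can be sketched as follows. The data shapes and the multiplicative scoring formula are illustrative assumptions, not Baymard’s internal method.

```python
def rank_guidelines(violations, impact):
    """Rank guidelines by collective negative impact within an industry.

    violations: {guideline_id: [bool per site, True = site violates it]}
    impact: {guideline_id: UX impact observed in testing, assumed 0..1}
    Returns guideline ids, most collectively harmful first.
    A hedged sketch of the ranking idea, not Baymard's actual formula.
    """
    def collective_negative_impact(g):
        sites = violations[g]
        violation_rate = sum(sites) / len(sites)
        # Widely-violated AND high-impact guidelines rank highest
        return violation_rate * impact[g]

    return sorted(violations, key=collective_negative_impact, reverse=True)
```

Under this sketch, a guideline that most sites in an industry violate, and that also causes severe issues in testing, floats to the top of that industry’s reading list.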
New Cart & Checkout UX Benchmark Added with 5,000+ New Examples
We’ve added a new Cart & Checkout benchmark with 59 sites benchmarked. This provides you with:
A good starting point for getting an overview of the overarching insights from this new vast dataset would be to read the updated Cart & Checkout UX benchmark analysis.
‘Compare Mode’ Added to Scorecards
The UX performance charts now have two features to make performance comparison easier:
Select Sites for Comparison — All scorecards now have an actual “Compare” mode that enables you to load specific scorecards into your current bar chart for comparison. This allows you to make direct performance comparisons between: any Review Tool scorecards you’ve created yourself, any UX audit performance scorecards Baymard has conducted for you, and any of the 60 case studies from the Benchmarks. You can enter this comparison mode by clicking the “Compare” link found in the lower-right corner of each bar chart graph.
Superimposed Benchmark Performances — All bar charts now have the entire benchmark scatter plot superimposed as small grey dots below the bars. This means that in all UX performance bar charts you now get 39,500+ UX performance ratings (from the top 60 grossing e-commerce sites in the world) summarized right next to the site performance you’re currently viewing, providing much better context for the bar chart performance.
Search Tool Added to Premium
You can now search within Baymard Premium and Baymard.com.
You’ll find the new search tool in the upper right-hand corner of the main navigation, and it allows filtering by both Research Theme and Content Type.
‘Importance’ Added to All Guidelines
The beginning of all guidelines in Baymard Premium now lists an “Importance” sentence that describes the overall importance of reading a guideline at one of three levels: “Detail”, “Impactful”, and “Essential”. This makes it easier to immediately understand the relative importance of a guideline before reading it.
The importance is mostly based on how much of an impact we observe the underlying UX issue to have on end users. It combines the “Severity” and “Frequency” data from our large-scale test sessions into a single “Importance” sentence. Generally, all of the 74 research topics list their guidelines by importance — so the guidelines found first within a topic are always more important to read than those at the end of the topic. (In some instances the guideline sequence within a topic deviates slightly from this if we deem that it vastly improves the reading experience of the whole topic.)
Select guidelines are also tagged as being a “Missed Opportunity”, “Web Convention”, and/or “Low Cost”:
If a guideline meets any of these three requirements, the guideline Importance level is boosted beyond its “severity” and “frequency” test values — as we then deem it more important to read.
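The importance logic described above — severity and frequency from testing, boosted one level when a guideline is tagged “Missed Opportunity”, “Web Convention”, or “Low Cost” — could be sketched like this. The numeric scales and thresholds are assumptions for illustration, not Baymard’s internal values.

```python
LEVELS = ["Detail", "Impactful", "Essential"]

def importance(severity, frequency, tags):
    """Derive an Importance level from test data plus tags.

    severity, frequency: assumed to be on a 0..1 scale.
    tags: set of strings such as {"Low Cost"}.
    A hedged sketch; thresholds are hypothetical.
    """
    impact = severity * frequency
    if impact > 0.5:
        level = 2          # Essential
    elif impact > 0.2:
        level = 1          # Impactful
    else:
        level = 0          # Detail
    # Any of these tags boosts the level beyond the raw test values
    if tags & {"Missed Opportunity", "Web Convention", "Low Cost"}:
        level = min(level + 1, 2)
    return LEVELS[level]
```

For example, under these assumed thresholds a low-severity issue would normally rate “Detail”, but a “Low Cost” tag would boost it to “Impactful”.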
Account & Self-Service Benchmark Added With 2,000+ Examples
We’ve finished the Accounts & Self-Service benchmark with 42 sites benchmarked across the 65 Accounts & Self-Service guidelines.
This provides you with a total of 2,000+ implementation examples for the 65 guidelines, 2,700+ UX performance ratings in the benchmark, and 330 annotated page design examples available in the Page Design tool.
Compared to the other overall themes of e-commerce UX, Accounts & Self-Service is the theme with by far the fewest decently or well-performing sites, and the theme with the second-highest number of mediocre-to-poor performances (only surpassed by e-commerce Search UX).
We recommend that you read the “Accounts & Self-Service benchmark analysis” as it provides a good overview of the research and highlights commonly missed opportunities relating to the account management and the order tracking & returns experience.
‘Current State of the E-Commerce UX Landscape’ Analysis Added
We’ve added a high-level analysis of the “current state” of the e-commerce landscape as a whole (from a UX performance point of view), based on the 40,000+ current UX performance scores in our benchmark database, which cover 60 major e-commerce sites across multiple industries.
We strongly recommend this for those wanting the complete overview of the e-commerce UX industry and our overarching research findings, before diving into the specific themes, topics, and guidelines. For this reason, it’s also linked to from the top of the Guideline page and the Benchmark page.
Read our new ‘Current State of the E-Commerce UX Landscape’ analysis now.
Exam & Certification: Become a “Certified UX Professional by Baymard Institute”
All Premium users on the Medium and Large accounts can now take a series of exams in the Exam & Certification tool and — if they pass all six exams — become a “Certified E-Commerce UX Professional by Baymard Institute”.
Certification also includes an (optional) profile page in Baymard Institute’s public list of “Certified E-Commerce UX Professionals” - certifying your high level of knowledge and expertise specifically within e-commerce user experience to clients, partners, and current or future employers.
Each year we conduct 16,500+ hours of new UX e-commerce research for Baymard Premium.
Here’s the roadmap of upcoming UX research (end 2020 to Q2 2021):
How often are the research findings updated?
For Baymard Premium, we conduct approximately 16,500 hours of new UX e-commerce research per year. How many new findings that yields depends entirely on the topics we research and the kind of evolution we see in user behavior (some e-commerce aspects change faster than others).
But from experience, we know that each year we will:
Important to note: generally, for usability research, there’s a big difference between the decay of aesthetic web style trends and the decay of actual user behavior. While style trends and aesthetic preferences change quickly, often every year, user behavior and the underlying causes of usability issues change much more slowly; for example, quite a few users still double-click online even though that hasn’t been relevant for two decades. That’s also why one of our audit clients saw a $10 million increase in annual sales from just a single checkout change, based on a checkout guideline dating back to our very first checkout study from 2010. At Baymard, we therefore tend to go for less frequent but more in-depth study updates (tied to changes in actual user behavior) rather than superficial refreshes (of style trends).
You can see the complete list of all new UX research and features added to Baymard Premium in the changelog list below:
(January 2020) Guideline #290 updated to version 2.0 with new mobile test findings and is now ‘platform agnostic’.
(July 2019) Scorecard comparison: we’ve extended the scorecard comparison feature to now also show comparison data at the guideline level. This works in both the Case Study and the Review Tool scorecards. This will show you the exact guideline compliance for all sites selected for comparison.