Lo-fi Experiment for a Big Box Retailer

Background

A sporting goods retailer was losing a considerable amount of money each year to pricing errors. These errors had many underlying causes, but some of the biggest revenue losses came from manual input mistakes, which were common because nearly every step of the pricing process was done by hand.

The Team

I was one of two designers on the team. I worked closely with a junior designer from the client, 2 PMs, and 4 developers. The junior designer had just been hired, but the client PM and developers had been working on pricing at this retailer for years.

Exploratory Research

We had been tasked with fixing all of pricing. This was a huge area of the company - hundreds of employees were attached to pricing in some form. We focused on promotional pricing errors specifically, because they were suspected to be the biggest pricing vulnerability. We debated whether to start with pricing accuracy (optimizing what the price should be) or pricing entry accuracy (making sure the intended price is the one displayed to the buyer). It was a difficult choice, but we ultimately went with pricing entry accuracy, reasoning that pricing optimization would be moot unless the intended price could reliably reach the end buyer.

The basis of our research was the promotion process. When running a promotion, the price for an item had to pass through the following steps:

  1. Assistant buyer proposes a series of price promotions based on seasonality and whether an item has been selling at target.

  2. Buyer approves the promotion and sends it to the data entry assistant.

  3. Data entry assistant enters the price into a legacy pricing system.

  4. The legacy pricing system passes that price on to a pricing coordinator, who checks whether it violates any obvious rules (too aggressive a discount, being featured in two promotions at once, selling for lower online than in-store, etc.) - see the sketch after this list.

  5. Pricing coordinator approves the price and passes it on to the ads and website management groups.

  6. Price gets printed out by signage coordinator in the physical store.

  7. Price gets displayed on the website.
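
To make the coordinator's checks in step 4 concrete, here is a minimal sketch of what that kind of rule validation could look like. Everything in it - the `Promotion` shape, the field names, the 50% threshold - is my own assumption; the retailer's actual rules were applied by hand, not encoded like this.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical shape of a promotion record; all field names are illustrative.
@dataclass
class Promotion:
    sku: str
    regular_price: float
    promo_price: float
    online_price: float
    start: date
    end: date

MAX_DISCOUNT = 0.5  # assumed threshold; the retailer's real limit wasn't documented

def check_promotion(promo: Promotion, active: list[Promotion]) -> list[str]:
    """Flag the obvious rule violations a pricing coordinator looked for."""
    flags = []
    # Rule 1: discount too aggressive.
    discount = 1 - promo.promo_price / promo.regular_price
    if discount > MAX_DISCOUNT:
        flags.append(f"{promo.sku}: {discount:.0%} discount exceeds {MAX_DISCOUNT:.0%} limit")
    # Rule 2: same item featured in two promotions at once (date ranges overlap).
    for other in active:
        if other.sku == promo.sku and promo.start <= other.end and other.start <= promo.end:
            flags.append(f"{promo.sku}: overlaps promotion running {other.start} to {other.end}")
    # Rule 3: selling for lower online than in-store.
    if promo.online_price < promo.promo_price:
        flags.append(f"{promo.sku}: online price below in-store promo price")
    return flags
```

The point of a sketch like this is that the rules become explicit and repeatable, instead of living only in a coordinator's memory.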

Many of these steps were manual and prone to error. We shadowed people in all of the roles above to see what kinds of errors were likely, and we also asked for data on the biggest revenue losses. Unfortunately, the losses were not tracked very well, so we had to rely on anecdotal evidence about the biggest losses people remembered. That biased us toward the more spectacular one-time losses rather than the small papercuts that could be occurring regularly and, in aggregate, causing an equal or greater loss.

The proposed solutions

We came up with a number of ways to tackle promotional errors. We went through a couple of iterations of generating solution ideas before settling on two concepts that we wanted to test with users.

  1. A scheduling tool that would allow a buyer to see what promotions and seasonal price cuts were coming up for an item, thereby allowing them to plan further into the future and giving them visibility into promotional overlaps and too-aggressive discounts.

  2. A price error catching tool that would allow a buyer to view a list of potential pricing errors when setting up a promotion, so that they could fix any mistakes before the promotion went out to downstream departments.
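
Conceptually, the error catching tool is just a batch run of checks like the `check_promotion` sketch above: gather every pending promotion, run the rules, and present the hits as one review list. A minimal sketch building on that earlier code, again with assumed names:

```python
from typing import Callable, Iterable

def build_review_queue(
    pending: Iterable[Promotion],
    active: list[Promotion],
    check: Callable[[Promotion, list[Promotion]], list[str]] = check_promotion,
) -> list[str]:
    """Run every pending promotion through the rule checks and collect
    the flags into a single list for a reviewer to work through."""
    queue: list[str] = []
    for promo in pending:
        queue.extend(check(promo, active))
    return queue

# e.g. build_review_queue(todays_pending_promos, currently_active_promos)
```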

We worked quickly, coming up with a wireframe prototype of each solution that same day and testing it the day after. The team divided up so we could talk to enough people to establish patterns for each concept. We decided to go with the error catching tool, because the prototype testing gave us more confidence that it would surface more of the pricing errors.

At this point, we decided to switch user groups and focus on price coordinators instead of buyers. We did so because mistakes were spread out across buying groups, and we wouldn't get enough volume of information from following a single buying group. The price coordinator team, however, was made up of 8 people responsible for catching pricing mistakes for the entire company. That gave us a good opportunity to gather data on the types of errors occurring most frequently while still catching errors before they reached the end consumer. That pricing error data could then inform solutions for other parts of the pricing process later on.

Learning from working software

We built a version of the tool in a matter of weeks. The hardest part was for the developers to wrangle all of the product information we needed out of disparate legacy systems. Once we had built enough of the software to start pulling in potential pricing errors, we began showing the results to the pricing coordinators for feedback. We kept the styling of the tool relatively simple, since the emphasis was on data quality: above all, readability and condensing a large amount of information into a single row.
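
I can only guess at what that data wrangling looked like in practice, but the core problem - stitching partial product records from several legacy systems into one view keyed by SKU - might reduce to something like this (the source names and precedence order are invented for illustration):

```python
def merge_product_records(*sources: dict[str, dict]) -> dict[str, dict]:
    """Merge per-SKU records from several legacy systems into one view.

    Each source maps SKU -> partial record. Earlier sources win when
    two systems disagree on a field; later sources only fill gaps.
    """
    merged: dict[str, dict] = {}
    for source in sources:
        for sku, record in source.items():
            combined = merged.setdefault(sku, {})
            for field, value in record.items():
                combined.setdefault(field, value)  # keep first-seen value
    return merged

# e.g. merge_product_records(pricing_system_rows, catalog_rows, web_rows)
```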

We caught hundreds of errors over the Black Friday promotional cycle, and we were able to reduce the amount of time it took for a price coordinator to review their promotions for the day from 1 hour to 10 minutes!
