Author’s Note: This blog was originally posted on March 30, 2011, on a company Facebook page. Now that I have a WordPress presence, I thought I’d post it here. Plus, there’s some good news at the end.
This blog posting looks at some recent examples of usability testing of vendor products, the challenges and resistance we faced from the vendor, and the lessons we have learned about why it’s valuable to conduct vendor testing while the project stakeholders observe live.
I’m continually amazed at just how poor the customer experiences are with many of the vendor products that we purchase and integrate with our customer web sites. In the last week alone, I’ve had two discussions with such vendors about how to improve their user interfaces to enhance the usability of their product. And some of the user experience violations are seriously UI Design 101 kinds of problems.
Take, for instance, our bill payment vendor. Their new rollout contains a page where all the labels are bold and left aligned. Great. But then on the next page, the labels are plain text and right aligned. Why? Underlined links that say “click here,” step-by-step visual indicators that look like big juicy clickable buttons (when in fact, they are not). And on and on. As dutiful User Experience practitioners, we test these vendor designs and share the feedback. In the bill payment case, they were completely grateful and somewhat abashed.
“We admittedly didn’t pay as much attention to usability as we should have.”
“Thank you for doing our work for us.”
Now, let’s take the example of a vendor who provides us with various online energy calculators. A couple weeks back, we conducted 1-on-1 usability testing with customers on a new online energy calculator tool. While the customers appreciated the idea of a tool like this (they all had a bent toward energy savings), the experience was so rife with usability problems — including four key “showstoppers” — that the project team felt strong reluctance to release it to our customers without the vendor fixing the problems.
Without going into all the usability details (which would be another topic in and of itself), three of the major problems were:
- A poorly designed interface that included a large, blue “Calculate” button that nearly every customer clicked prematurely, before filling out information in nine other sections of a profile.
- Text that was so small that customers constantly had to lean in to read it. We have video evidence of one customer leaning in so far, the top of his head was no longer in the camera view.
- Poorly designed navigation that prevented customers from moving between the various components of the online checkup.
There were dozens of other issues as well, caused by basic violations of UI Design 101.
We prepared a big, colorful report, full of screen captured images and with links to video evidence of customers struggling all throughout the experience. We sent that to the vendor. Then, yesterday, we had the big conference call where we went over all of the results.
And we heard the typical spiel back about how they need to talk to the product team, how they have 400 other customers using this tool, how these are not insignificant recommendations. And that’s fine. We understand that you can’t just redesign entire experiences by pushing a button. We expect small changes to be made now, and maybe bigger things to come later. We constantly work with clients and vendors to come up with creative alternatives.
It was all going really swell until near the end, when one of the vendor’s project people offered up the following tidbit regarding our Showstopper #1, the big, blue “Calculate” button:
“The reason that the button has become so prominent over time is that our customers [other utilities] only care about having customers calculate the report. If we change the calculate button, they worry about completion percentage. They have to report to their commissions about how many customers complete energy saving activities. So we might have a big problem with making a change there.”
So, they’re telling us that despite the fact that customers are inadvertently clicking the big, blue, dominant button prematurely — without filling out all the other necessary information about their home — it’s better for the utilities that people submit incomplete reports and get inaccurate usage information, inaccurate energy savings plans, and inaccurate savings dollar amounts (all while they’re happily oblivious to this fact).
Somehow I feel that if the commissions knew that a decent percentage of the customers who calculate a home energy audit are doing so with inaccurate information, based on a poorly designed experience, they might be less interested in the quantity of submitted reports than in making sure the quality of the submitted reports is very high. I mean, isn’t the goal here to empower customers with really accurate information about their home energy use so that they can make meaningful and relevant changes to their behaviors? Suppose a customer submits an incomplete home profile and we tell them they can save $500 a year by weatherproofing their home, when in reality they could only save $50 because their home is actually pretty well weatherproofed, and the poor interface prevented them from telling us that.
And by the way, they should worry about completion percentage anyway. They have 10 sections that customers have to fill out. If most customers unwittingly click Calculate after finishing only the first section, the completion rate is high, but the completeness is low. If you put the Calculate button at the end of section 10, yes, you are likely to have drop-offs as users trudge their way through the sections. But for those who do accurately complete everything, the rewards are high.
It perplexes me that other utilities integrate such vendor products into their sites without conducting their own user experience evaluations. Maybe all they really do care about is how many people push the button. But something tells me that if they observed their own customers struggling with these products, they might be way more reluctant to blindly push out these tools and interfaces. Maybe I should convince more of my peers at fellow utilities to pound on their vendor products. Maybe as a collective whole, we can apply enough pressure to convince vendors that the user experience really does matter, and not just the numbers of completed payments or completed reports.
In any case, the work that we have done has a lot of people at work a little anxious now. Because the project stakeholders observed the tests, they saw first-hand how the customers were struggling. The stakeholders squirmed in their seats. They think about upcoming rate cases, negative PR, unhappy customer sentiment. They think about an impending launch of a tool on their web site that customers will have problems with. They don’t like it.
And I suppose, besides improving the experiences for our customers, that’s the other real value in testing with vendors — getting the project stakeholders to share in those “aha!” moments. Getting them on board with usability testing. Making them uncomfortable with what they see. So that they are our allies. They provide the pressure back to the vendor. Maybe the cachet that comes with being a big utility can help influence the vendor to more readily make changes.
Really, in the end, I’m still baffled that these companies sell millions of dollars’ worth of products — and we continue to be the ones making design and experience recommendations to them.
“Thank you for doing our work for us.”
Sure. We’ll send you our invoice.
Author’s Post Script: Since the original publication of this article, the vendor has agreed to fix our four usability showstoppers — mostly using the redesigned interfaces we sent them, no less. This was great news to learn, and we feel the resulting product will be much more usable and useful.