Using Remote Research to Get Actionable Insights

Imagine—the economy is in recession, budgets are t-i-g-h-t, yet you still need to provide recommendations that are based on more than a hunch. What’s a digital agency to do? Thank goodness for that word, “digital,” as the advent of new technologies has made it possible to get actionable insights at a lower cost, often with even better data quality. This will be the first of several posts on how we at ZAAZ are using Remote Research methods to get great insight and ensure that the Voice of the Customer informs our recommendations.

Would the Best Performing Label Please Stand Up

In this post we’ll look at a project we did for our long-time client, and one of the largest credit unions in the nation, BECU (Boeing Employees’ Credit Union).

The project was designed to answer just one important question, “What is the best navigation label for people who are looking for Internet and mobile banking options?”

We created five different designs of the BECU homepage, each with a different label or label placement, and hosted them on a staging server. The alternative labels were:
  • Remote Account Access
  • Mobile & Online Banking (tested in both the primary and global navigation)
  • Remote Banking
  • Online Banking

We then asked five groups of 30 people each to accomplish the same task, with each group seeing a different version of the label on the home page. The task was:

“Use the site to find how to view your banking information using your Internet-enabled cell phone.”

Four follow-up questions were asked:
  1. How difficult was it to complete this task?
  2. How certain did you feel that the name of the section would take you to the information you were looking for?
  3. What specific information would you expect to see when you click on the name of the section?
  4. What, if anything, would be a better name or label for this information?
After only 48 hours, the results showed that ALL the suggested labels performed better than the original label (“Account Access”). And the study suggested a winning label, “Mobile & Online Banking”, as well.




Remote Research helped answer a key navigation question with more than an educated guess or emotionally driven opinions. And because it was an unmoderated study (i.e., participants completed the study without a researcher present) and the tool did most of the analysis, we were able to find the best solution very quickly.
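
The tool did the heavy lifting for us, but for anyone curious what this kind of comparison looks like under the hood, here's a minimal sketch of comparing task-success rates across label variants. The counts are hypothetical, not the actual BECU data, and this is not the vendor tool's analysis:

```python
from scipy.stats import chi2_contingency

# Hypothetical (succeeded, failed) counts for 30 participants per label variant
results = {
    "Account Access (original)": (14, 16),
    "Remote Account Access":     (21, 9),
    "Remote Banking":            (22, 8),
    "Online Banking":            (24, 6),
    "Mobile & Online Banking":   (27, 3),
}

for label, (ok, fail) in results.items():
    print(f"{label:28s} {ok / (ok + fail):>4.0%} task success")

# Chi-square test of independence: did success rate vary across the five variants?
chi2, p_value, dof, _ = chi2_contingency([list(counts) for counts in results.values()])
print(f"chi-square = {chi2:.1f}, dof = {dof}, p = {p_value:.4f}")
```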

Horizontal vs. Vertical Filter Control

I'm currently moderating a RITE study at my company for a top 10 mutual fund firm. Part of our goal for the site is to make it easy for people to quickly find a set of relevant funds. For the first iteration we placed the filter tool horizontally above the results set, like so:

[Screenshot: filter controls displayed horizontally above the results set]

We were concerned that people wouldn't know the filter options were cumulative, or that they'd be thrown off when the results changed with every click of a filter. However, users told us something a bit more obvious: the results set sat partially below the fold, so reviewing the updated results after each click was cumbersome.

Enter the beauty of the RITE study, where we could try a whole new approach. We put the filter options on the left side of the page, like so:

[Screenshot: filter options in a left-hand column beside the results]

Users had no problem using this version. Because the client was there all along, we didn't have to convince anyone that this approach was best. The users proved it for us.

Excel for Mac Dialog Box

I'm a fan of the Mac Business Unit at Microsoft. (I've worked with them through my employer in the past.) Most often they do great work.

But this dialog box made me laugh (after I got over the fear of clicking OK). Apparently, someone couldn't find the "Cancel" button.

Can You Please Verify The "Brand"?

You probably know the story.

A client has done the right thing and asked for a usability study to verify that the prototype is easy to use. You define the audience, write the recruitment screener, start recruiting, and write the study protocol. Then, right before the study starts, the (dreaded) request comes in: "While we're at it, can we make sure that people like the brand direction?"

<Insert big sigh here>

For clients who are new to user research this request is understandable. I mean, we have our target audience in the room so why not? It seems inefficient to conduct an entirely separate study to capture brand perception. What's the big deal?

Here's the big deal. People asked to evaluate a brand need to meet additional criteria beyond what usability recruits require. For example, to get an accurate picture of brand perception, you should recruit people with a mix of brand exposure. That is, include some people who are not familiar with the brand and some who are very familiar. This requirement is less important for most usability studies, since brand familiarity usually has little impact on task completion rates.

Sample size is another reason why brand research cannot be wedged into a typical usability study. A typical study of 10 people will only give you high-level, directional guidance when it comes to brand, whereas 10 people will illuminate most of the usability issues in a prototype. So any findings regarding brand in a usability study would need to be verified with surveys or other quantitative research to reach statistical significance.
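
To put a number on how "directional" ten participants really are, consider the uncertainty around a proportion at that sample size. Here's a minimal sketch using the Wilson score interval; the counts are made up for illustration and are not from any real study:

```python
import math

def wilson_interval(successes, n, z=1.96):
    """95% Wilson score confidence interval for a proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

# Hypothetical: 7 of 10 participants pick the adjective the client hoped for
low, high = wilson_interval(7, 10)
print(f"7/10 = 70%, but the 95% interval runs from {low:.0%} to {high:.0%}")
# Roughly 40% to 89% -- directional guidance, not statistical proof
```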

A third reason is time. A typical usability study takes at least an hour, and investigating the brand is no small task; brand research alone probably warrants an hour of its own. So you've just doubled your in-lab time and most likely doubled your participant gratuities. If planned in advance, these additions are not an issue, but they are very problematic when added to an existing study.

Ok, enough complaining. If you can't convince your client to conduct separate research, what do you do? First, don't compromise your main deliverable; doing a sub-par job on both the usability study and the brand work is worse than doing one of them well. Instead, add a quick-and-dirty brand analysis and ensure the client understands its limitations. The best tool I've found for this is a modified version of Microsoft's "Desirability Toolkit". The process is explained over at UserFocus as a way to measure satisfaction, but it's also useful in our context.

In essence, users describe the brand using keywords and the results are displayed as a tag cloud in which a larger word indicates it was selected more often. Here are the results for a hypothetical example for Fake Company's brand:

[Tag cloud: keywords participants chose to describe Fake Company's brand, sized by frequency]

If the users' tag cloud matches the client's expected adjectives, you can't be certain you have alignment because the sample is so small, but it's a useful indicator. Confirming these findings with more research is still required, but giving clients something to start with helps. And because it can be done very quickly, the impact on your primary usability work is thankfully small.
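
If you want to see how little machinery the tally itself requires, here's a minimal sketch of counting keyword selections and scaling them for a cloud. The adjectives and the sizing scheme are invented for illustration; they are not the actual Desirability Toolkit materials:

```python
from collections import Counter

# Hypothetical adjective selections, one list per participant (not real study data)
selections = [
    ["trustworthy", "professional", "dated"],
    ["trustworthy", "reliable"],
    ["confusing", "dated"],
    ["trustworthy", "professional", "reliable"],
    ["trustworthy", "slow"],
]

counts = Counter(word for participant in selections for word in participant)

# Scale each word's display size by how often it was chosen -- the essence of a tag cloud
min_px, max_px = 12, 36
top = max(counts.values())
for word, n in counts.most_common():
    size = min_px + (max_px - min_px) * n / top
    print(f"{word:14s} chosen {n}x -> ~{size:.0f}px")
```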

Experts on brand research please weigh in. I'd love to hear your thoughts on this method and how you would tackle this challenge.

The Key Ingredient in User Research: The Recruits

Like a fine meal, the recipe for success in any user research is somewhat complex. It takes careful planning, appropriate timing, and a few key ingredients. Far and away the most important ingredient is, not coincidentally, the users who will test the concept.

The most rigorous and well-thought-out methodology, the perfect study protocol, the ideal prototype, and, critically, a huge chunk of time and effort will all be wasted if the study participants turn out to be sour. And embarrassingly, the soufflé flops as the client (who is footing the bill) watches.

Thus creating the appropriate screening criteria is vital. Here are some issues to consider when recruiting:

Recruit professionally and use floaters

Using a professional recruiting service is the best way to find great people. Not only can they help craft the screener questions, but they can spot potentially poor candidates that would otherwise slip through. And the time it takes to manage finding people in-house can be significant.

Professional recruiters can also set up floaters--people who wait on-site so they can participate in the study when (not if) there's a no-show. This is more expensive, but worth it when key clients are observing.

Manage segmentation-creep

More often than not, clients will drastically expand the scope of a study by asking for deep comparisons between different groups of customers, called segments. To compare segments this way you must include a reasonable sample size for each segment.

If a client wants to know whether there are different usability issues for people on the east coast vs. the west coast, and for first-time users vs. experienced users, the sample size and analysis time explode. Costs rise quickly, so it often takes a little encouragement to limit extensive comparisons across groups to cases where differences are likely but unknown, and where those differences would plausibly affect the design.
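
To make the arithmetic concrete, here's a tiny sketch of how cells (and recruits) multiply once you start crossing segments. The per-cell minimum is a hypothetical number chosen for illustration:

```python
from itertools import product

segments = {
    "coast": ["east", "west"],
    "experience": ["first-time", "experienced"],
}
per_cell = 8  # hypothetical minimum participants per cell for a meaningful comparison

cells = list(product(*segments.values()))
for cell in cells:
    print(" / ".join(cell))
print(f"{len(cells)} cells x {per_cell} participants = {len(cells) * per_cell} recruits "
      f"(versus roughly 10 for a single, uncrossed study)")
```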

Focus marketing-defined criteria

Sometimes the marketing department requests that we use their screener. This is great, but only if it includes criteria that specifically define actual users of the system. Sometimes marketing segments are defined too loosely to be actionable in user research. They can, however, be a great start to a focused screener.

Ensure technology expertise is carefully defined

For web usability studies, the most important criterion to get right is experience using the web. This can be difficult to ascertain because people can't rate themselves well without some sort of baseline to compare against. A combination of age and several frequency-of-computer-use questions makes a good proxy.
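
As one illustration of turning those proxy questions into something a recruiter can act on, here's a hypothetical scoring scheme. The question set, point values, and cutoff are all invented for the sketch; this is not a validated instrument:

```python
# Hypothetical frequency-of-computer-use answers mapped to points
FREQUENCY_POINTS = {
    "daily": 3,
    "a few times a week": 2,
    "a few times a month": 1,
    "rarely or never": 0,
}

def web_experience_score(answers):
    """Sum points across several frequency-of-use questions (e.g., email, shopping, banking)."""
    return sum(FREQUENCY_POINTS.get(answer, 0) for answer in answers)

candidate = ["daily", "a few times a week", "daily"]
score = web_experience_score(candidate)
print(f"score = {score}; recruit as 'experienced web user' if score >= 6 (hypothetical cutoff)")
```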

Filter out uncommunicative participants

Add a question or two to filter out people who may fit all the criteria perfectly, but then have little to say during the study. Self-reported ratings regarding "expressing oneself" work well to ensure vocal participants.

This list is just a start of course, but it captures some of the most important pitfalls. For more background and details on recruiting, Mike Kuniavsky's book, Observing the User Experience, is a great place to start.

User Research vs. Market Research

The latest issue of User Experience (UX) focused on the (dysfunctional?) relationship between market research and user research. Whether you work at an agency or as an internal practitioner, conflict between the disciplines is a matter of when, not if. Let's consider some of the reasons why this conflict occurs and examine what can be done to mitigate the damage.

The conflict

The root of all conflict is the ego's fear of annihilation, no? Practitioners of both user and market research can feel that their work isn't appreciated by the other camp, i.e., their egos are being rejected. Does this go beyond philosophical bruising? Absolutely. After all, financial resources (especially for research) are always limited. And like rival brothers, the disciplines compete in the same arena. Both are advocates for the user, yet each has its own very different skill set and approach. Market researchers tend to value statistical validity based on what people self-report through surveys, whereas user researchers focus on analyzing observed behavior using usability studies and ethnography.

Perhaps market researchers have the upper hand, as they're more established professionally and it's hard to argue against statistical validity. (Of course, as those in the know will tell you, it's easy to misunderstand the importance of statistical "certainty".) For sure, it's a scrappy battle, as user research has gained more prominence over the years.

The solution

As the above-mentioned magazine advocates, the solution is to bring the two disciplines together. Combining qualitative and quantitative research paints a much more complete picture of the user, so arguably the disciplines are actually very complementary. Of course, learning (and thus respecting) both disciplines' methods is no trivial task. Math is hard (for many) and qualitative research is at times more an art form than a formula you can memorize.

This cross-pollination must occur, however, since the client does not care about such internal grumblings--they want actionable recommendations. User and market research working together will create a much stronger story and in the end better serve the ultimate client, the consumer.

The Power of Doing it RITE

I was once again reminded of just how powerful user feedback can be. And this time it was a RITE Usability study that reopened my eyes.

If you're not familiar, RITE stands for Rapid Iterative Testing and Evaluation, a variation of traditional usability testing documented by researchers at Microsoft in 2002. In short, you test a design with five users on Day 1, improve the design based on feedback on Day 2, test again on Day 3, iterate on Day 4, and then test the final design on Day 5 with eight users. A RITE study is not always appropriate, such as when there are many tasks or if the design is quite fixed, but whenever possible I'd highly recommend it. Here's why.

Some benefits of the RITE Method:

Team collaboration: The development and design teams REALLY get into it. With traditional usability testing it's sometimes hard to get dev and design to attend even one session, but here it's a requirement. The user feedback quickly initiates intense collaboration sessions which are just plain fun. And it's very rewarding when changes to the designs resolve problems found earlier.

Client satisfaction: If the client attends any of the sessions they quickly see the value of user feedback, as well as the team's problem-solving skills and creativity in action. Usually this voodoo is behind the curtain, but putting it in plain sight actually demonstrates the value of the work.

Time savings: There's a reason "Rapid" is part of this method's name. The changes between the first and final designs were absolutely dramatic in our study. (Unfortunately we can't show the screens due to client restrictions.) We tested a Flash-based tool for narrowing down TVs of interest from a large number of choices.

Though it's not practical to go through all the changes, there were dozens of improvements based directly on user feedback. Most importantly, the most severe usability issues were completely resolved in the final iteration.

While there are challenges associated with the RITE Method, such as a perceived higher cost (I say "perceived" because arguably it's actually less expensive; it just concentrates the costs up front) and a demanding schedule, I think the benefits easily tip the scale.