What They Say and Do: Practical Tips for Harvesting Reliable User Feedback for Planning
Jinfo Blog
30th September 2006
[This article reflects key points in a presentation to be given by the authors at the Internet Librarian International conference in London, 16-17 October 2006 <http://www.internet-librarian.com/>.]
When a website, an intranet, an information service or a marketing/communications campaign -- you name it -- does not see the traffic it 'should' see (given the size of targeted user groups, the number of association members, the number of individuals in a demographic group and other guideposts), those in charge of that service are understandably concerned. Did we misinterpret what users were telling us? Did we not get the full picture when we asked them? Have users' needs shifted and we missed it? Were we even tracking the evolution of those needs?
Over time, services can, and often do, fall out of alignment with user requirements. It is necessary to be vigilant and persistent in monitoring those requirements -- not just by listening to what users say about their needs when asked, but also by observing what they do in practice. Further, it is important to resist the temptation to act only on unsolicited input (the squeaky wheels) and instead to launch a systematic scrutiny of what users and non-users alike think about the service in question.
User interaction with all things electronic -- websites, intranets, extranets, e-newsletters, web stores, e-marketing -- is a complex affair (unless, of course, that interaction consists of hitting the Delete or Close button). Some of that interaction we can observe through traffic and click-path tracing tools; but such tools naturally cannot tell us what went through the mind of the individual who just clicked, much less help us predict what he or she might do in future. Nor do they help us determine how representative that individual is of the total potential user group.
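Such tools do show the 'what' at scale, even if not the 'why'. The following is a minimal sketch, in Python, of the kind of aggregated click statistics referred to later in this article; the user IDs, page names and visit records are hypothetical.

    from collections import Counter

    # Hypothetical visit records: (user_id, page), in the order clicked.
    visits = [
        ("u1", "/home"), ("u1", "/products"), ("u1", "/contact"),
        ("u2", "/home"), ("u2", "/search"), ("u2", "/search"),
        ("u3", "/home"), ("u3", "/products"),
    ]

    # Page popularity: how often each page was requested.
    page_counts = Counter(page for _, page in visits)

    # Click paths: the ordered sequence of pages per user.
    paths = {}
    for user, page in visits:
        paths.setdefault(user, []).append(page)

    print(page_counts.most_common())          # which pages draw traffic
    for user, path in paths.items():
        print(user, ":", " -> ".join(path))   # what each user did -- not why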
Those in charge of (re)building electronic content delivery and communication vehicles need a deep and crystal-clear understanding of user preferences. But 'preferences' is an oversimplification: we need to understand what users want to get done when they come to our site or service, and what they would like to have in hand when they leave. Once we understand that, we can work on making the process smooth and enjoyable.
To arrive at such an understanding, we need answers to questions about several facets of the user experience:
Context: What specific goals are users hoping to accomplish when they paste in that URL or click that bookmark? Do they have a concrete task to finish (what is the conversion rate for a currency; what is the latest news on an organisation), or are they conducting broader research (what are the NGOs saying about X issue; what is the medical consensus on Y drug; what is the market outlook for Z products)?
Mood and attitude: Are they under time pressure to solve a problem at work, or are they at home, browsing to see what's new or interesting in an area of personal interest?
Past experience: What are their expectations of the navigation and content options, based on past exposure to similar sites or services?
Subjective impressions: What features in appearance, navigation and content presentation do they find intuitive, easily followed, complicated, ambiguous, confusing, totally baffling or downright annoying? Can they articulate why?
Willingness to share reactions: Which of their many possible reactions ...
- Will they offer voluntarily? ('I really like ...')
- Would they offer if pressed? ('Well, now that you mention it, I suppose ...')
- Would they keep to themselves? ('Saying I'm confused could make me appear unsophisticated or ...').
Likelihood of and reasons for returning voluntarily: Aside from situations where users are forced to visit a site because there is no alternative, how interested are users in coming back, based on their initial visit?
- 'I accomplished what I needed to do eventually, but it was so frustrating that I'll look for another resource'
- 'No other sites offer this content so I'll return, but not happily'
- 'I didn't accomplish what I needed to do, but it was so much fun I think I'll go back another time'.
Investigational approaches
Building and maintaining compelling websites, information services and communications campaigns requires a thorough understanding of existing -- and potential -- users' context, goals, attitudes, experience, impressions and desires. This tall order cannot be filled by an annual survey or the odd poll! It requires a systematic approach to assessing and monitoring user needs, and then to implementing new tools and technologies geared to meeting those needs.
Below, we offer a set of practical tips on how to obtain credible evidence of user opinion. None are surprising, but that does not lessen the discipline needed in carrying them out regularly.
The practice of assembling input about user opinion can go by whatever name you care to give it. If 'audit' is too ominous a word in your culture, choose another designation (e.g. review, assessment) to indicate that some form of examination is involved. The key is to undertake a systematic investigation that will yield information upon which to act.
Below we address two scenarios: an audit of a communications programme and an audit of a website or intranet design.
In the case of investigating a communications programme, the communications audit is an effort to understand what users seek from the organisation or service in question. You are looking to find out where users go if they cannot find the information they need from you, and which other organisations or services (i.e. the competition) are communicating with the target group so successfully that target users seek them out first. Such an understanding will help those in charge of communicating with the target market as they plan how to:
- Present the information that needs to reach the target audience in such a way that it is useful to the audience and is reinforced through several means
- Avoid overloading the audience with more information than they can absorb and use
- Present information in the way users expect or seek to find it
- Minimise the staff workload resulting from an increased number of communications vehicles, greater message frequency and more message customisation
- Use techniques such as knowledge-sharing, stories, ideas and best-practices exchanges, and interactivity to serve the target group.
In the case of a website or intranet audit, the goal is to understand how the users experience their interaction with the site. What parts are intuitive for them, and where do they scratch their heads? What makes them say, 'Cool!' and what makes them say, 'Huh?' What's missing for them? Is there too much irrelevant clutter?
Ask, or watch?
In addition to the 'please tell me' method of investigation, the at-the-elbow observation technique yields valuable insight. There are two variants:
- Silent observation: 'Just let me sit here and watch as you go about typical tasks, and allow me to ask why you did this or didn't do that'.
- Easter egg hunt: 'Please find the answers, using this website or using your memory of having received a promotional message, to the following specific questions'. Users' success rate tells a lot about what needs to change (a simple per-question tally is sketched below). A key advantage of the egg-hunt method is that users participating in the study cannot get by with low-value statements like, 'It seems alright', or, 'Yes, I heard of it'. They actually must dig in and look for something specific, and in the process they must explain their search path or memory triggers.
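The success rate becomes most instructive when tallied per question: a question that most participants fail points at a specific navigation or wording problem. A minimal sketch in Python, with hypothetical participants, questions and results:

    from collections import defaultdict

    # Hypothetical results: (participant, question, found_the_answer).
    results = [
        ("p1", "opening hours", True),  ("p1", "refund policy", False),
        ("p2", "opening hours", True),  ("p2", "refund policy", False),
        ("p3", "opening hours", False), ("p3", "refund policy", True),
    ]

    tally = defaultdict(lambda: [0, 0])    # question -> [found, asked]
    for _, question, found in results:
        tally[question][0] += int(found)
        tally[question][1] += 1

    for question, (found, asked) in tally.items():
        print(f"{question}: {found}/{asked} found ({100 * found / asked:.0f}%)")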
Asking methods range from the highly involved (for example, an in-depth personal interview) to the minimally involved (for example, a survey). Similarly, observation and challenge methods range from the up-close, at-the-elbow, click level to the remote analysis of aggregated click statistics.
The mix of methods chosen depends on several factors, including corporate culture, the level of detail needed, the size of the total user population, the amount of time available, and the skills of the staff members carrying out the assessment.
Investigational steps
These steps will help you decide what kind of investigation you need to launch and how to carry it out.
Step 1: What type of investigation is needed?
The audit pre-work consists of determining precisely what kind of insight is needed. For example:
- What do we already know? (Users have been complaining they can't find ...; or it seems members of the target group haven't understood ...)
- What statistical evidence is at hand? What does it suggest?
- What do we know we don't know? (Is it a matter of training? Or must the site be redesigned? What was confusing in our message?)
- Who do we need to hear from? (Did the complaints come from a certain group of users or from across the board?)
- Who can undertake the work of getting the input? (Can we do it ourselves, or should we hire in consultants -- who might obtain a more candid picture?)
- What mix of methods is best suited for the situation at hand? (Personal interviews, at-the-elbow observations, Easter egg hunts, focus groups, a survey?).
Key pros and cons for each type of method include:
- Personal interviews: These unearth detail and unlock candour in your subjects. They give you the opportunity to ask, 'Why do you say that?' and help you build a relationship with your subject. The disadvantages are that it may be difficult to reach the 'right' interviewees, and interviews are time-consuming, so you can only reach so many individuals.
- At-the-elbow observations: These let you witness first-hand how people use your service. You can see where users hesitate or misinterpret a button, which can be far more illuminating than interview comments. Again, however, it may be difficult to reach the right subjects, and observations are also time-consuming, limiting the number of individuals you can reach.
- Egg hunts/communications effectiveness tests: A benefit here is that the documented success rate they produce serves as a mirror of how well the site is designed and how well the communications campaign succeeded. They isolate the key specifics that need to be addressed. On the down side, users may not be keen to 'take a test'.
- Focus groups: A focus group is an efficient way to get input from users (count on six to eight participants per 1- to 2-hour session). Participants tend to feed off each other, drawing out more detail. Scheduling makes this method time-consuming, and it requires an able facilitator who knows how to keep the group on track.
- Survey: A survey gives you the opportunity to quantify how many agree with X opinion and to measure change over time by comparing responses with the last survey's responses (a simple wave-to-wave comparison is sketched below). However, respondents may not want to take yet another survey, making for a low response rate and unreliable results. It is also difficult to devise questions that get at the nuances of a site. Another hazard is that respondents may 'upgrade' their responses, for instance clicking 'frequently' instead of 'occasionally'.
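When comparing one survey wave with the last, it is safer to compare answer shares than raw counts, since response rates differ from wave to wave. A minimal sketch in Python, with hypothetical answer options and counts:

    # Hypothetical answer counts for the same question in two waves.
    last_wave = {"frequently": 40, "occasionally": 90, "never": 70}
    this_wave = {"frequently": 55, "occasionally": 95, "never": 50}

    def shares(wave):
        # Convert raw counts to shares so waves with different
        # response rates remain comparable.
        total = sum(wave.values())
        return {option: count / total for option, count in wave.items()}

    before, after = shares(last_wave), shares(this_wave)
    for option in last_wave:
        change = (after[option] - before[option]) * 100
        print(f"{option}: {before[option]:.0%} -> {after[option]:.0%} "
              f"({change:+.1f} points)")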
Step 2: Clipboard -- getting the raw input
Once the audit process has been planned, the detailed work of 'sitting down with the users' and 'collecting the responses' ensues. The note-taking and documenting is quite time-consuming, and examiners who are emotionally attached to the site or activity in question must be willing to document the less flattering input as clearly as the more flattering input!
Step 3: Rollup -- what are the key themes? What might they mean?
Now comes the editorial phase. What are the key messages and takeaways? What should we do with input that falls outside the key findings? How can the findings be translated into pointers toward concrete directions? Does what we observed in one instance (e.g. with respect to a website) carry over into other areas such as an e-newsletter or the intranet?
Step 4: Translation into specific plans
This is the stage when the difficult analysis takes place. Ask yourself: What does it all mean? How can the key themes be translated into concrete design or process changes? What will it take to implement them?
Step 5: Implementation and measurement
These last two steps may seem daunting, but they are important. First, having acted on user input, it is crucial to demonstrate to the study participants (and all others) that the input caused specific change. In other words: 'Thanks. We heard you. We acted accordingly. It will always be worth your while to participate when we come looking for input'. Second, we must find out whether the changes had the desired effect: 'Did we change the content or design in the right way? Has your issue been resolved? Is the site easier to use now? Are you better able to understand what we are communicating now?'. Naturally, traffic and usage indicators will tell their own story about the degree to which the entire exercise was successful; a simple before/after comparison is sketched below.
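Reading those indicators can be as straightforward as a before/after comparison on a handful of metrics, where a drop in help-desk queries, for instance, is as telling as a rise in visits. A minimal sketch in Python, with hypothetical metric names and weekly figures:

    # Hypothetical weekly figures before and after the changes went live.
    before = {"visits": 1200, "searches": 300, "help-desk queries": 85}
    after = {"visits": 1450, "searches": 420, "help-desk queries": 40}

    for metric in before:
        pct = (after[metric] - before[metric]) / before[metric] * 100
        print(f"{metric}: {before[metric]} -> {after[metric]} ({pct:+.0f}%)")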
Words of advice from veterans
As seasoned consultants in the business of helping our clients get the most out of their investments in services and content, we offer in conclusion a handful of tried-and-true tips for anyone about to launch a user input project:
1. Distance the owners from the input gathering. The team members in charge of building and maintaining a website or operating a service programme know too much and will have difficulty maintaining neutrality and objectivity as they receive user comments. They will tend unconsciously to suppress vital details.
2. Inflict no pain. Users are busy; make the input process smooth -- fun is even better. Easter egg hunt contests, with prizes to be won for (1) the highest success rate and -- this is key -- (2) the most illuminating commentary pointing to navigation issues, work wonders!
3. Reward participants. First, show appreciation for the participation through simple gestures -- coffee and cookies, thank-you notes to the manager, public acknowledgement. Later on, tell them what the result was (see item 6 below).
4. Prime the input pump by making it safe for participants to be candid. Clearly signal that they are in good company and that they will not be seen as untrained or incompetent if they admit they are confused: 'You are not the first to say that. We are intrigued. Can you elaborate?' Or, 'Your colleagues have indicated they find X ambiguous. Do you, too, find it confusing?'. Reaffirming that responses will be kept anonymous may also help individuals feel comfortable speaking frankly.
5. Secure future cooperation without narrowing user input to a pool of regulars. 'May we count on you again as we test the revised website? Could you suggest colleagues whose input may be useful in future?'.
6. At the end of the audit, it is vital not to simply fade away. Having asked for input obliges us to show participants that the input resulted in change, or will do so. We recommend a summary to participants indicating not only gratitude for the input but also a specific rundown: (a) we are able to do X immediately; (b) we can achieve Y in such-and-such a time frame; and (c) options for certain of the more ambitious changes are being investigated. Where a desired change cannot be implemented just yet, or at all, explain the reason specifically. A statement such as 'We recognise that your suggestion is a desirable feature and regret that at this time, resources limit our ability to implement it' not only clarifies that the user input was valid and has been heard, but could also generate some grassroots pressure for more resources! The key is to signal that you heard the input the participants offered, and to keep everyone in the loop as to what is coming.
Related FreePint links:
- Internet Librarian International 2006: 16-17 October at the Copthorne Tara Hotel in London. Register now at <http://www.internet-librarian.com/registration.shtml>
- "How to use an Organizational Information Audit to Determine What
Services Users Really Need and Want" video
<http://www.sirsidynixinstitute.com/seminar_page.php?sid=37>
- Post a message to the authors, Ulla de Stricker and Barbie E. Keiser, or suggest further resources, at the FreePint Bar <http://www.freepint.com/bar>