
Usability testing on our OPAC

I have a confession to make: until last week, I hadn’t ever done any formal usability testing.* In many ways, we’re still working our way through the findings of a usability test done several years before I got here.** So when the Joint Library Catalog (JLC) Online Public Access Catalog (OPAC) Working Group kept going around and around, between ourselves, about interface decisions, I suggested maybe we should test the interface with some of our users. Happily, they agreed!

So last Friday I headed over to one of the public library’s branches. The branch manager is a pretty great guy and had scheduled four users, with varying levels of technology comfort and old-OPAC familiarity, for me to work with. (Which is doubly awesome. One, he knows his patrons and did a better job than I could in picking people; and two, participant recruitment is the only part of the process that I am actively afraid of. I am unwilling to walk up to somebody and bother them. I find it really unnerving when people do that to me.) It was a good situation: he was grateful that I was testing with his patrons, and I was grateful that he did the worst part for me. :)

Because of snow (no joke), two of the participants didn’t show up; we found a third, though, and I think the public library user population was reasonably well-represented. We had a homeschooled teen (middling tech familiarity, high OPAC familiarity), a security guard (low tech familiarity, high OPAC familiarity), and a stay-at-home mom (low tech familiarity, low OPAC familiarity). I had previously tested my test on my husband, so, although it wasn’t recorded for the rest of the committee to watch, I had seen a high tech familiarity, low OPAC familiarity user in action, as well.

I ran the tests a lot like Steve Krug suggests in Rocket Surgery Made Easy. I did not, however, have a room full of people listening in real time, so I modified his scripts and checklists heavily. I recorded the sessions, though, because I wanted the rest of the committee to have the opportunity to see users in action, without librarian assistance. It was a good chance to finally learn how to use Camtasia (and I am never going back to Captivate, ugh). I’m even a fairly competent Camtasia user now: once I had the recordings, I had to combine multiple recordings into one stream (Camtasia crashed partway into a test), hide people’s addresses (possibly using an image from GitHub), and mute the part where I read my PIN out loud.

On that note, here are several things I learned about usability testing from doing these tests:


  1. Don’t just bring a mouse; also bring a mouse pad.
  2. Bring the dongle that connects the stupid laptop’s USB to stupid Ethernet cables, or you’ll be stuck on stupid slow library wifi. (Which might have more closely approximated the user experience in the wild, but was frustrating during the test and on the videos.)
  3. Consider rephrasing the “occupation” question. None of the users seemed bothered by it, but I felt sort of bad.
  4. Have a test account – This would have saved me time with video editing and also helped protect everyone’s privacy, including my own. (Something my committee colleagues know, now: my account is delinquent due to an old $0.25 fine. Something they don’t know: my other account is blocked, due to >$10 in fines. Glad I had two, or I couldn’t have done the “Place a hold” part of the test at all.)
  5. Be willing to stop the user when you aren’t learning anything new. Related: when they seem frustrated, be willing to drop tasks. This is part of my ongoing assertiveness training (which people who know me really well think is unnecessary, but, actually, totally needs to happen; I’m a mess with strangers).

I think there are further things I can clean up in my version of the script, too. (If you scroll down, the blue text is the list of tasks.) For instance, the series task was a disaster, and I’m not doing that again, unless Sirsi builds in better series searching.

But now we’re kind of moving into things I learned about Sirsi Enterprise:

  1. The Advanced Search is a mess. (We can’t fix this one; we’ll have to ask Sirsi.) It doesn’t carry in the parameters of your last non-advanced search. If you do an advanced search and hit the “back” button, it doesn’t remember what you had filled in. It’s just terrible.
  2. It’s hard to get to the holdings, at least for users trained on our old catalog. (Of the two untrained users, one got there OK, and one did not.) They kept looking at the “Place Hold” and “Text This to Me” buttons, as if those would save them, when they were “supposed to” click on the title or book cover, to get to the item record. We’re going to add a button that says “See Copies” and see how that works. (I hate adding stuff to an interface. As a rule, that’s usually the wrong call. This might be one case where it isn’t.)
  3. The holdings might be a little hard to interpret, even when you get to them. This is something I plan to focus on in my next usability test, if they let me do more. :)
  4. We made a bad call in removing the “item type” icon, apparently. (We replaced it with text. The icon alone was really unclear; “book” was a blue book, and “journal” was a green book, just for instance.) People kept clicking on items that weren’t the type they wanted; clearly, they didn’t see the text without the icon. So, we’re going to put the icon back, with the descriptive text. I think that will work nicely.
  5. The login form to Place Hold/My Account is dumb. It doesn’t remember your card number between form submissions (say you got your PIN wrong—retype all those numbers!), and if you hit “enter” before you enter your PIN, it clears the form. Dumb.
  6. People’s interpretations of the facets vary wildly. Some notice them, and some don’t. Of the users who notice facets, some of them “get it,” and some don’t. Some want to use facets to broaden their search. Some want to use the drop-downs at the top of the screen to narrow their search (I mean without hitting “Search” again—it doesn’t work that way). Some don’t notice the Include/Exclude buttons, which actually apply the facets. Some hit the Include/Exclude buttons without first applying limiters. It’s all over the board.
  7. The ones who “get” facets seem to expect them to be persistent between searches. That really should be an option, Sirsi!
  8. The “Only Show Available” button is nearly invisible. It needs to be flush with the left-hand side of the column or something.
  9. Sirsi should build in something that senses whether someone’s searching for a particular series (2/4 users included the word “series” when searching for a series, so there’s one hint) and does a better job of dealing with that. Seriously, guys, it’s 2013. As a librarian, I know not to trust the catalog when I want a book in a series; I Google for the author’s website and bring the title back to my OPAC. But that’s stupid. Patrons might not know to do that. None of us should have to do that!
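Sirsi would have to build this, of course, but the detection half doesn’t need to be fancy. Here’s a minimal sketch in Python of the kind of heuristic I mean (the function and its behavior are my invention, not anything Sirsi ships), assuming that the word “series” in a query signals series intent, as it did for two of my four users:

```python
import re

def detect_series_intent(query):
    """Guess whether a catalog query is a series search.

    Heuristic: if the user typed the word "series" (as 2 of my 4
    testers did), flag the search and strip the keyword so it does
    not pollute the title/keyword match.
    Returns (is_series, cleaned_query).
    """
    cleaned = re.sub(r"\bseries\b", "", query, flags=re.IGNORECASE)
    cleaned = re.sub(r"\s+", " ", cleaned).strip()
    is_series = cleaned != query.strip()
    return is_series, cleaned
```

A catalog that flagged a search this way could then fall back to a series-field search, or at least group the results by series, instead of leaving patrons to Google the author’s website and carry the title back by hand.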

*I mean, I had tested prototype pages with students working in the library, to help decide on our approach when combining major areas of the website; also, a coworker and I got student workers’ feedback on the two discovery systems we were considering. I still defend those tests on the following grounds: the students’ knowledge about “library stuff” varied from total expert to total novice. But they were informal in that we didn’t record the sessions, recruit outside the organization, or offer payment of any sort, other than, you know, doing the tests “on the clock.”

**Students don’t know where to start. OK, well, we got Summon, which (numbers suggest) has helped a lot. But if Summon fails you, then what? How does a web designer make the multitude of options clear and intuitive? … Yeah, we don’t know. And I can test other stuff—and will—but when your site can’t meet its primary goal, it makes testing the specifics a little less exciting.

Published in technology, usability

2 Comments

  1. D'Arcy Hutchings

    Thanks for the post, Coral! I’m looking forward to seeing the new interface (the way we choose to configure it — I’ve seen it in use on other libraries’ sites). A couple of screen shots would be helpful here for us to easily follow your assessment, if that’s possible. I loved your tweet. I can only imagine how hard it was to sit quietly and let them struggle!

    • I wanted to include screen shots, but Mike has already made a bunch of changes. I wish I’d captured some the day of the test!
