Are we on the right track? Researching the user experience of our peers’ finding aid instances

By Meg McMahon

The Approach

In February 2023, Amy Deschenes, the Head of UX and Digital Accessibility, and I attended the online conference UX360. For me, the standout idea of the day came from Zachary Schendel, the Head of Research at DoorDash, who spoke about conducting UX research in the style of a landscape review: a qualitative study of a small sample of peers' products. When he worked at Netflix, for example, he would conduct research with Amazon Video's audience about Amazon Video. He used the results to confirm user behavior on Netflix's interface and to understand features Netflix had not implemented in its own interface.

This idea of conducting research on another company's product with that company's own audience was revolutionary in my thinking about the types of user research the User Research Center (URC) could conduct to answer questions about improving services at Harvard Library. It is no secret that many academic libraries use similar vendor products but implement them slightly differently for their specific audiences.

With this new lens for user research, Amy and I decided to use a landscape review approach to verify design choices for an ArcLight wireframe. ArcLight is a discovery tool designed to display archival description and digitized archival material; in short, an online finding aid system.

The Research 

We reached out to stakeholders in the ArcLight project at Harvard Library and asked which other institutions' ArcLight instances they wanted to understand better. Together we decided on Duke University and the University of Michigan (UMich). Both had made design choices that our team could see adopting, as well as choices that ran counter to the ones the team planned for our own implementation.

I conducted two separate user studies, one for each institution. The interviews differed slightly for each ArcLight instance because each had different features, but both interview scripts were designed to tie back to the five hypotheses below.

  1. In search results, “grouped by collection” is the preferred default presentation.

  2. In finding aids, components should only be clickable if there is related digital content.

  3. In finding aids, the side navigation view should sync with the content displayed in the main content area (see the sketch after this list).

  4. In finding aids, your “location” within the finding aid should always be visible at the top of the screen and should not disappear as you scroll.

  5. In finding aids, the left-hand navigation should only display the main hierarchical levels (series and subseries). Folders or items will display in the main content area.
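To make hypothesis 3 concrete: “syncing” here means the side navigation highlights whichever series is currently visible in the main content area. Below is a minimal browser-side sketch of that behavior in TypeScript; the data-series-id attribute and the active class are illustrative assumptions, not ArcLight's actual markup.

    // Hypothetical sketch of hypothesis 3's "sync" behavior: highlight the
    // side-navigation link for whichever series is visible in the main
    // content area. data-series-id and .active are illustrative names,
    // not ArcLight's actual markup.
    const sections = document.querySelectorAll<HTMLElement>("main [data-series-id]");

    const observer = new IntersectionObserver(
      (entries) => {
        for (const entry of entries) {
          const id = entry.target.getAttribute("data-series-id");
          const link = document.querySelector(`nav a[href="#${id}"]`);
          // Toggle the highlight as the section enters or leaves view.
          link?.classList.toggle("active", entry.isIntersecting);
        }
      },
      // Treat a section as "current" while it sits in the top 40% of the viewport.
      { rootMargin: "0px 0px -60% 0px" }
    );

    sections.forEach((section) => observer.observe(section));

Hypothesis 4, by contrast, usually requires no script at all: keeping the “location” breadcrumb pinned while scrolling is typically a matter of position: sticky in CSS.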

Our desired participants were current users of the finding aids at their respective institutions. I reached out to each institution's user research department, and both groups were kind enough to send our recruitment email to their users.

We did have to supplement with Harvard participants, but overall we were happy with the pool: the UMich study had four UMich participants and four Harvard participants, and the Duke study had five Duke participants and one Harvard participant.

The Results

A landscape review approach to UX research has two main strengths. First, early in the design phase a project team can surface problems users might have with a new feature. Second, it can point to features, or future studies, that the project team had not considered and that would benefit their users.

On the first point, the research led me to reject one of our hypotheses: “In finding aids, the side navigation view should sync with the content displayed in the main content area.” Both Duke and UMich had side navigation that was independently navigable, and in both studies all participants said they would like the side navigation to be navigable separately from the main content panel rather than in sync with it.

On the second point, we found that most of the restriction messaging, that is, messaging explaining that a researcher cannot view an item, on the finding aids was confusing. This was not a part of the design we had previously considered in our wireframes. During the studies we learned it was a pain point for users, so we decided a future study would focus on testing restriction information text in our ArcLight wireframes.

The Conclusion 

After analyzing the results, we created a list of research-based enhancements for our current wireframes, and I updated the wireframes accordingly. The next step is a user study to understand how our users process and understand restriction information.

It was important to our team to share what we had learned with our colleagues at Duke and UMich to foster inter-institutional knowledge sharing. We sent the report to both institutions and had follow-up conversations with their user research teams about what their users found challenging when using their instances of ArcLight.

Overall, this approach to UX research is immensely useful in the early stages of a project where similar or adjacent systems already exist. While I am aware that this style of research in the tech field is probably not done in collaboration with other organizations, I am eager, as a library UX professional, to use this approach to benefit both Harvard Library and our peers by sharing findings and designing with the user in mind.