Understanding the role of different modes & methods of delivering user research in achieving impact: a reading list

Brigette Metzler
8 min readJun 12, 2021

Hoo boy…you can tell, can’t you, with a title like that, that I have been down a few rabbit holes…

Yes, I added a floof to the page, because you’ll need to come back and look at its lovely fluffiness from time to time!

So, this morning (yes, it’s been quite the ride, thank goodness it was a rainy day, and yes, it has just clocked over into tomorrow…), I asked a question on Twitter.

Firstly, I have to say, in my defence, I’ve been looking at this for a long while without much success. Goes to show that Twitter can be a marvellous place, and that many hands make light work. It all seems so obvious now…

I still stand by what I say regarding the difficulty of finding research about user research — we are good at talking about delivery perhaps, but I’ve not seen any empirical comparative studies of the efficacy of different types of user research outputs (user journeys, stories, personas, reports, etc.) in achieving research impact. We talk about it a lot, and say things about our experiences, but I suspect the field is too nascent to have that kind of evaluation done — I’m hoping I’m wrong, and someone (my colleagues?) will set me straight.

What does seem to be true is that the impact of the research (the outcomes) is still more commonly considered when people are talking about impact, but there are some notable exceptions.

I got quite a few replies, and some suggestions for search terms. That was super useful, because my previous method of looking at research impact, while it had indeed turned up the marvellous LSE Impact Blog (among others, see below), had led me down a path of measuring the efficacy of research. Some of this asks about the factors involved in efficacy, but it just wasn’t specific enough for me. I am curious to find empirical research about the modes and methods of delivery — what kinds of outputs, and what kinds of delivery, are more successful (face to face, online — and if online, what are the characteristics of a good UI for users). I’d found a few things, but nothing on this specific problem (though the LSE Impact Blog did indeed have something, I just hadn’t looked recently!)

Turns out there are quite a few fields across a number of disciplines that look at this stuff. Obviously. We spend a LOT of money on research, and on the mechanisms of delivery. I was always suspicious that my search terms were failing me. I should have asked friends a lot earlier. I’m still not quite satisfied, though two papers were absolute standouts.

Of course, this is a reading list — I’ve now read a lot of what I list below, but need to dig further. So many people were helpful and interested that I promised I’d report back on what I found.

The brilliant Alba Villamil gave me a great framework for thinking about this, and I will use it going forward as I work through the reading list a second time.

Given there are a lot of fields to draw from, I’ve not used the above framework yet. Rather, I’ve separated the readings out into fields, as this will be the easiest way to keep digging.

Science Communication

I knew this one of course, and I’m the one adding it in — I’ve worked with a science communication grad in a previous work environment, and knew that the field had a lot to offer. In the context of looking at the modes and methods of delivery of research for user research, I felt the modes (the research outputs) were likely to be so vastly different that I’d not find much of use to apply in my context. This book chapter, ‘From Science Communication to Knowledge Brokering: the Shift from “Science Push” to “Policy Pull”’, looks good however!

User Research

Behzod Sirjani talks about the mechanics of user research a bit, and Alba offered his blog post The Organisational Appetite for Research as a suggestion.

Yasmin Amjid responded, and provided a blog she’d written based on her experience.

User Research Libraries

The ResearchOps publication has loads of course, such as ‘I built a user research repository, you should do the same’ from Jonathan Richardson. This blog by Jake Burghardt, ‘Extending insight “shelf life” to get more value from research in product planning’, is also useful. This one by Elena Woiciechowska, ‘How to get started with your first research repository’, is great and a good checklist.

Stephanie Marsh has built a user research library, and she does a good write-up here: Working Towards User Research and Insight Libraries.

Richard Smith (all-round fabulous guy, and he has been on the ResearchOps podcast too) built the Hackney City Council user research library, and it is public — lots to be learned! Snook wrote about doing this work with Richard as well.

Salma Patel also designed and built a user research library at Ofsted.

Of course, the two most famous non-gov user research libraries are Microsoft’s HITS and Uber’s Kaleidoscope.

For insights repositories, you can’t go past WeWork’s Polaris, by Tomer Sharon and team.

Knowledge Translation

This was the prize of my weekend: ‘A systematic review of barriers to and facilitators of the use of evidence by policymakers’ by Simon Innvar, Theo Lorenc, Jenny Woodman & James Thomas.

This one was also brilliant, but didn’t dig into the mechanisms of knowledge transfer enough for me (another keyword search I hadn’t thought of) — more please! ‘How can research organisations more effectively transfer research knowledge to decision makers?’ by John N Lavis, Dave Robertson, Jennifer M Woodside, Christopher B McLeod, and Julia Abelson. I really loved this paper too. Standouts, both of them.

This also looks useful in terms of being able to apply a framework and a theory to one’s approach: ‘A scoping review of full-spectrum knowledge translation theories, models, and frameworks’ by Rosmin Esmail, Heather M Hanson, Jayna Holroyd-Leduc and Fiona Clement.

Research Evaluation

So, of course there’s a whole field of this, research evaluation, and I feel silly for not thinking of it — one of my colleagues at work has a masters in research evaluation. I guess it’s just a sign we need to talk more. I’d always thought of it in terms of evaluating the quality of research, and thought it sounded like a very complex field that I should probably leave to the experts :)

Keryn suggested I look at Carol Weiss, and indeed, it seems she spent a lot of her life looking at research evaluation and impact, and got specifically into those modes and methods of delivery that I am interested in. Here is a paper that looks good: ‘The Many Meanings of Research Utilization’.

Readiness to Change

Prof Gail Langellotto suggested this as a search I could do. I found this paper, ‘Key Questions to ask when putting together a theory of change for research uptake’, by Andrew Clappison.

I found this one by the ODI, ‘Developing capacities for better research uptake: the experience of ODI’s Research and Policy in Development programme’, and realised I’ve read it previously. It isn’t specific enough to my question, though it does note in the lessons learnt that, for their report, there was more interest from researchers in their field of interest than in understanding what influences research uptake more broadly (p. 8). They also concluded they needed to pay closer attention to who the audience was, and that the length of contracts influenced what they had time to do in terms of influencing uptake. They noted that power imbalances also impacted buy-in. Capacity development (known as research evangelism in UR) was listed as a key factor in uptake.

Research Impact Measurement

This was my original rabbit hole from when I started thinking about this stuff back in 2017/18. Back then, I used a lot of Prof Mark Reed’s frameworks. Measuring impact is a significant part of the work of building a library for research that will never see the light of day or be cited. In research impact work, I never found anything on understanding the role of the design of the research output. There are lots of references to making things easy to read or involving people in the research process to maximise your impact, but no empirical evidence on the media, the design of the outputs, or the delivery UI (if digital). I’m hoping this little list might provoke someone to point out my egregious errors and send me through some citable research!

I note this fascinating new piece, ‘Writing impact case studies: a comparative study of high-scoring and low-scoring case studies from REF2014’, by Bella Reichard, Mark S Reed, Jenn Chubb, Ged Hall, Lucy Jowett, Alisha Peart & Andrea Whittle.

Here’s Mark Reed’s Fast Track Impact site, which simplifies the process of measuring impact.

Here is Richard Smith on measuring research impact, with a cursory glance at delivery right at the end: ‘the process of engaging people is more important than the product.’

Even the ARC seems to ignore the mode and method of delivery entirely in the Research Impact Principles and Framework.

Here is the LSE Impact Blog (the info about the methods of delivery is hidden amongst the blog posts). This one is a good comparative review and an absolute banger in terms of my original question: ‘Putting the collective impact of global development research into perspective — what we learned from six years of the Impact Initiative’.

Library UX

This is a field dedicated to library user experience. One of the first blog posts in the ResearchOps Community publication was from a Library UX professional, Kelly Dagan. As this article (Can User Experience Research Be Trusted? A Study of UX Practitioner Experience in Academic Libraries) points out, it is an emergent field, much as it is in user research. Another tricky part is that, for people working in larger libraries, the connection to the research is more tenuous: there’s less need to measure the efficacy of the research outputs, and more need to ensure people can find the research in the first place. I am guessing it would be the difference between a regular library and a special library. So it is one half of the puzzle, but perhaps (on my cursory glance) doesn’t cover much about which modes and methods of delivery produce the best results in terms of research uptake.

Here’s another of interest, on Structuring and Supporting UX work in academic libraries by Shelley Gullikson.

Of interest in framing who users are in libraries is this short, pithy article on The Anatomy of Library Users in the 21st Century.

ALIA (not a field, but an association that produces reports on special libraries), absolutely worth adding to a reading pile:

In Australia, we have the Australian Library and Information Association (ALIA) and then the Australian Government Library and Information Network (AGLIN).

AGLIN did a review into Commonwealth Government Libraries in 2016/17. Here is the Research Report and the stage 2 report.

In looking at the return on investment for having a user research library, this report is useful: The Community returns generated by Australian ‘special’ libraries (2014).

Given I have received, read, and written this up in not quite 24 hours, I expect some errors. Please feel free to point them out, but be kind, it is currently 1.39am as I type this!

Many thanks to the following people on Twitter who helped me assemble this list:

