As with most of my weeknotes these days, these weeknotes are structured around the 8 Pillars of User Research — I use these to help me keep the scope of my role knowable, to me and to others. I use the Pace Layers Matrix to structure my research operations strategy in my everyday work.

This week’s post is all about the legal aspects of ReOps. Below, I talk about the work of creating trust environments. Security and privacy have something in common with love, perhaps, if the locks are anything to go by.

Photo by Morgan Petroski on Unsplash

What did you do?

This week, the wonderful Jo Brennan joined the Capability Team (one of the two teams I sit within). Jo is a legal and privacy expert, and so you can see I was clapping my hands with joy. Research Operations has 4 pillars that are about creating the right environment to scale the impact of research, and 4 pillars that are more technical. Each of those technical pillars comes with legal aspects and is actually a fairly complex space, especially in government.

Towards the end of the week, Jo and I got together to discuss the legal and privacy challenges I was facing across each of the pillars. It was a long meeting, and she was so gracious in giving her time. It was just delicious to talk to someone who not only gets every word you are saying, but can add more, and has experience in the very thing you’re struggling with. It was an absolute joy.

I thought it might be useful for any research operations people reading if I did a little deep dive on that, using our conversation as a starting point. It won’t be exhaustive, otherwise I’ll be here all day (and besides, I’m not the expert, so please don’t sue me for missing stuff out etc!), but hopefully it will provide a useful resource as a starting point for anyone thinking about this stuff.

In Australia, when we think about that complexity in human research, we think of:

Plus general legislation for government employees:

But in each pillar there are also other factors to consider. I’m going to skip the 4 non-technical pillars for today, and just do the technical ones:

Recruitment and admin

In the talk I gave recently (see below), I pointed out that if you have no research participants, you have no research program. It is crucial to get recruitment sorted, no matter the research method or the scale of your research program.

Under the recruitment and admin pillar, Jo and I discussed using vendors for participant recruitment. Some aspects to consider are:

  • Public Governance, Performance and Accountability Act 2013 — we always have to make sure the use of funds meets high standards of governance, performance and accountability; that we provide meaningful information to the Parliament and the public; that we use and manage public resources properly; that we work cooperatively with others to achieve common objectives, where practicable; and that we require Commonwealth companies to meet high standards of governance, performance and accountability. So, saying yes to using a vendor for participant recruitment isn’t something we take lightly.
  • Data security — where is our data stored, and how securely? In Australia, Australian Privacy Principle 8 (APP 8) of the Privacy Act governs cross-border disclosure of personal information, which underpins our data sovereignty considerations. Here’s a handy guide to APP 8
  • The NHMRC National Statement on Ethical Conduct in Human Research (2007) on informed consent and risk, with respect to how (and whether) we share data between the vendor and ourselves.

Jo and I also discussed panel management. There, some of the things to consider include:

  • storage of personally identifiable data — including fitting into your organisation’s existing records management and data management policies
  • whether you already have appropriate mechanisms in place for the storage of such sensitive data
  • how you will govern and administer that data over the course of its life (including archiving policies and deletion policies).

Of course, tied into panel management is the issue of consent. If you are currently creating a consent form for each research project, a different approach needs to be taken for panels. A panel tends to be made up of people who’ve said yes to taking part in research over the longer term. One way to think about this is that the panel becomes a longitudinal piece of research: even though the topics researched over time may vary, the overarching goals of the research remain the same, so due consideration must be given to how the consent form is crafted. Research ethics needs careful consideration too, as time adds a greater chance of consent being varied (by the participant or the researcher — see 2.28 of the Statement), not just at set-up but right the way through. Ethical reviews therefore need to form a part of the administration policies.

Alongside panel management comes due consideration of participant experience. To provide a good participant experience over the longer term, there needs to be a mechanism to store and access participant information that is legal, ethical, and also workable. It is important to be able to update some kind of database where we can track research projects and consent, as well as the little important things: did we send a thank-you email, did we receive feedback, have we followed that up? Are there any issues with their experience (such as accessibility, or access to the internet in these remote research times), or things we can learn for next time? In addition to this being covered in the ethical review conducted at set-up, in government, access to personal information is very tightly controlled (even, or perhaps especially, internally) and is usually dealt with according to the legislative reason the information was gathered in the first place. Personal information is also usually stored in systems designed to be very, very secure and largely inaccessible, even to employees. That makes making those little updates very difficult!
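To make the record-keeping this implies a little more concrete, here is a minimal sketch of what a panel entry might track. Everything here is hypothetical — the field names and the `needs_attention` helper are illustrative, not a description of any actual departmental system:

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical sketch of a panel record; fields are illustrative only.
@dataclass
class PanelMember:
    participant_id: str               # internal ID, not a name
    consent_given: date               # when longitudinal consent was recorded
    consent_review_due: date          # panels need periodic ethical re-review
    deletion_due: date                # retention/archiving policy applies
    projects: list = field(default_factory=list)  # research projects joined
    thanked: bool = False             # did we send the thank-you email?
    feedback_followed_up: bool = True
    experience_notes: str = ""        # e.g. accessibility or internet access issues

def needs_attention(m: PanelMember, today: date) -> list:
    """Flag the small-but-important admin tasks described above."""
    flags = []
    if not m.thanked:
        flags.append("send thank-you email")
    if not m.feedback_followed_up:
        flags.append("follow up feedback")
    if today >= m.consent_review_due:
        flags.append("consent review due")
    if today >= m.deletion_due:
        flags.append("delete/archive per retention policy")
    return flags
```

The point of the sketch is that consent review and deletion dates sit in the same record as the participant-experience details, so none of them gets forgotten as the panel ages.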

Incentives also come under this pillar, and we discussed a request for a ruling from our legal team over whether research participants should receive a payment for their time. This issue comes up in lots of situations where the receipt of incentives may carry an ethical risk — you see it in healthcare, civic tech, and legal tech, for example.

Data and knowledge management

In this space, we are breaking newish ground really, though there is a growing group of people who’ve tackled it. With all this attention given to research data over the longer term, and to appropriate, governed, ethical access to it more broadly, we need a mechanism for storing research data that is legal and ethical, and that meets data, information and knowledge management policies. We need to consider the storage of data that may be considered personal, such as videos. How do we make these accessible to researchers, as covered by the consent form, while also complying with all these policies? How do we make the data more easily discoverable, searchable and usable?
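One way to picture the access question is a record that carries its consent scope with it, so any access request can be checked against what participants actually agreed to. This is purely illustrative — the names and roles are made up, not a real system:

```python
from dataclasses import dataclass

# Illustrative only: a research artefact that carries its consent scope,
# so access requests can be checked against what was agreed.
@dataclass
class ResearchArtefact:
    artefact_id: str
    kind: str                  # e.g. "video", "transcript", "insight"
    contains_personal_info: bool
    consent_scope: set         # audiences the consent form covered
    retention_until: str       # per records management policy

def can_access(artefact: ResearchArtefact, requester_role: str) -> bool:
    """Allow access within the consented audience for personal data;
    de-identified outputs can be shared more broadly."""
    if not artefact.contains_personal_info:
        return True
    return requester_role in artefact.consent_scope

video = ResearchArtefact("a-17", "video", True, {"research_team"}, "2028-12-31")
can_access(video, "research_team")   # allowed: within the consented scope
can_access(video, "comms_team")      # refused: outside the consented scope
```

The design choice being sketched is that governance travels with the data rather than living only in a policy document, which is what makes broader discoverability safe.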

We have to adhere to our records management and information management policies, and to the Archives Act. A lot of our current policies are made for different ways of working — COVID has really accelerated our move to collaborative, online spaces, for example, so the way we make documents, work on them, and store them is changing. This means engaging with those areas to see if amendments are necessary or possible, or exemptions, or if new processes can be designed that fit within existing arrangements.

Luckily, I have some experience to lean on here, but it’s always context dependent, and working through this covers several stages. In our initial stage, we got the consent right and applied some initial governance processes to enable us in the short term. Now we are looking at the mid-term and how that looks, including thinking through the research lifecycle and what our governance processes might look like when we have requests to access information.

Of course, we need a mechanism, somewhere to store all the relevant information. We need to make sure it acts to augment the research process, not hinder it. We need to make it fit for purpose for users. We also need to give thought now to the longer term, to where we want to be. I can see the long term scope is broadening daily, so trying to decide how broad to make the middle-range scope is difficult. I don’t want us to go so broad that we get stuck in a legal quagmire and find it too hard to move. Though the legal stuff might not immediately come to mind here, it absolutely applies — getting this stuff right is the key to making the effort invisible later on.

I absolutely want us to be in a position, in a year’s time, where the biggest pain point for a researcher, or anyone accessing the research outputs, is something as small as the thumbnail of an image not being big enough, or their topic having a different term than the one they would use. Let it be that anyone using our library says something like, ‘it doesn’t look that difficult’. Simplicity is hard work.


The trick about putting a research operations function in place is that you’ve stated you are ready and committed to doing a bunch of change work. If you make it someone’s job to govern research data, then you have to do it right. All the carpets are pulled up, and everything that’s been simmering away underneath them is made visible. You might find yourself questioning whether doing that was the right thing.

If you take that analogy, I’d suggest that yes, you’ve made it harder for yourself, but having all that stuff accumulating dust under the carpet is not doing your longer term health any favours. What we found when we did the ‘What is ResearchOps’ project in the ResearchOps Community is that a lot of researchers were suffering in silence: feeling like, or even knowing, they weren’t really doing things right, but having no words to communicate this unseen work. There was a silence in the industry that was contributing to overwhelm, burnout and unhappiness. Doing that project was almost like a collective trauma session for researchers.

The tricky bit now is how not to just transfer all that unseen work to the shoulders of one person. ResearchOps is very much a team sport — whether you’re a ReOps team of one, or a whole team, you will be working with specialists across your organisation. With governance, and each of the other technical pillars, you are going to be making friends with security, ICT, legal, privacy, procurement, data, information management, and knowledge management people. If you can establish collaborative working relationships across all these areas, you’ll have a kind of exo-team (like an exoskeleton), a team outside your team, all working with you to reduce friction and make the system work. No wonder ResearchOps people are known as the connectors across an organisation!

One of the specific challenges Jo and I talked about in the week just gone was that longer term governance arises a lot with Research Operations: you really do take something that has been ad hoc and turn it into something that includes considerations of time. Suddenly something that was relatively fixed becomes a lot more organic, with a changeable scope.

Data and knowledge management and research participant recruitment are covered above, so other parts to consider are program-wide governance — things that cover all of it, such as privacy (the APPs), and the way the GDPR and CCPA influence practice worldwide when it comes to the management of research about people.

Something I’ve learned previously with the super smart people I’ve worked with before, is that there are three main things to consider with program wide governance — the original research participant, the ethics of how we behave with research data and outputs, and any approvals that need to come from stakeholders. Ultimately, I believe research operations requires a high trust environment, and governance is the thing that creates that trust.

Being explicit about what people can expect from us as operations professionals, and what we expect from them, is the work of policies, processes and procedures. As a person who hates admin and is really bored by ticking boxes, it surprises me daily that this is my job — one I willingly and intentionally created, alongside a bunch of other people I wouldn’t have picked as getting much satisfaction from rules! What I will say, I guess, is that I notice research operations people are people people: helpers, empaths in many cases. Trust is everything, and so we hang out here, trying to create the capacity for trust at scale.

Tools and infrastructure

When it comes to tools, there’s a shift when you bring in research operations — you bring in an extra gatekeeper in a sense, but you do it for several very good reasons:

  • Assessment of user needs around tools, and comparative analysis, takes time, so making it someone’s job removes that weight from the shoulders of individuals who are just trying to get their work done.
  • There are administrative and financial gains to be made if you can take a program-wide approach to procurement. This is often done at the organisation-wide level, but having someone navigate that process on researchers’ behalf makes sure it gets done.
  • Administering the tools also allows your ReOps professional to do data management, and to maintain an eye on research assets that might not be stored in a central spot. This helps with meeting the need to be able to learn from what we already know.

Being able to see how people arrive at the insights and findings they arrive at is a powerful, seldom available tool in a research lead’s toolbox. From a research operations person’s perspective, they’re often the person in charge of data governance within a research program, so administering the tools researchers use helps them do that part of their job.

In our chat, Jo and I ranged widely over Australia’s data sovereignty laws, the data storage challenges and considerations these present, meeting the APPs (all linked above), and the flags that a vendor’s compliance with the GDPR and CCPA gives us. For example, if a vendor is GDPR compliant, it tells me they have a Data Processing Addendum (or Agreement), that they have ways of ensuring they can delete personal information, and that they have policies on data storage. If they are CCPA compliant, it is a flag to me that they will be very, very explicit about what kinds of personal information they are storing, when, and how. This isn’t all we need to consider, of course (and I’m no legal expert!), but they are good things to look for.
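To make those flags concrete, here is my reading sketched as a simple checklist — the questions and names are illustrative, and none of this is legal advice:

```python
# A hypothetical vendor-assessment checklist sketching the flags above.
# Questions are illustrative only; not legal advice.
VENDOR_CHECKLIST = {
    "has_dpa": "Is there a Data Processing Addendum/Agreement (a GDPR signal)?",
    "can_delete_personal_info": "Can they delete personal information on request?",
    "data_storage_policy": "Do they have explicit data storage and location policies (APP 8)?",
    "pi_inventory": "Are they explicit about what personal information they store, "
                    "when, and how (a CCPA signal)?",
}

def unanswered(responses: dict) -> list:
    """Return the checklist questions not yet answered 'yes'."""
    return [q for key, q in VENDOR_CHECKLIST.items() if not responses.get(key)]

# A vendor with a DPA and deletion processes still leaves the
# storage and inventory questions open.
unanswered({"has_dpa": True, "can_delete_personal_info": True})
```

A checklist like this doesn’t replace the legal assessment, but it does make the gaps visible early, before procurement gets serious.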

We talked about the need for ground rules in security and risk management, and for processes to follow when those need to be reconsidered, or where there’s a lack of clarity. Doing new things in new ways means uncovering new questions, or asking old questions in new contexts. Again, those ‘mechanisms and processes that set user research in motion’ arise — how do we draw the map, put out street lights along the way, and show where the potholes might be? There’s a lot of work in that, and also a lot of communicating. I, for one, am very glad to have another person in my ‘exo-team’ (my team outside my team) to help me do just that.

What are you thinking about?

When writing this post, I was reminded of something that I’ve found a little frustrating over the years. That is, I find the National Statement on Ethical Conduct a bit lacking in spaces where longitudinal research occurs, or where co-design and in-house research panels are concerned. As I see academia embracing the idea of research data management over the longer term, including encouraging consideration of the use of data over the longer term, I am hopeful that some effort may be put into thinking about the changing landscape in human research and addressing it within the Statement.

Anything else?

This week I did a talk at SavvyUX on Getting Started with ResearchOps. I’ve done this talk a lot of times, but the lovely thing about it is that each time I do it, I get to go back and review what we’ve already said, learned, and done in the ResearchOps Community. Each time I get to extend the talk and add new insights. For this talk, that meant a deep dive on each pillar using the Pace Layers Matrix.

It was also nice to be able to spot some of the ways my inexperience showed up in previous work. One of the things about working in the open is that your work follows a learning trajectory you can never anticipate at the start. Back in April 2020 at UXInsight, I said: I hope we get to a point where we look back and are a little bit embarrassed, because that means we learned something. We grew. It’s really hard to walk the walk and sit in that discomfort. I know people have told me it stops them working in the open: having to accept they’ll need to swallow their pride at some point, and be found wrong.

Because of that discomfort and reluctance to put oneself forward, I just wanted to point out that it happened, and say: yeah, it was hard. Uncomfortable. Humbling. But I’ll choose to do it again and again, because I want to live in a space of learning and growth. Choosing to do that openly just helps expose where I’m wrong more quickly, and with a greater degree of rigour, than if I were working in silence on my own. The cost to the bruised ego is worth it (so far).

researcher, counter of things, PhD student, public servant…into user research, information architecture, ontology, data. Intensely optimistic.