Today was a huge day with back-to-back meetings because we’re moving from planning our methodology into testing the reality of implementation. In other words: we know what we want to test based on the research literature and our fieldwork, but how do we actually do it? Unlike many academic studies, we don’t test in a lab with ideal participants (most behavioural research is carried out on university students!). We deliver behavioural change interventions and measure outcomes with a representative sample of the population.
Our trial is testing whether our behavioural messages lead to behavioural change around diversity and inclusion awareness.
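For readers curious about the mechanics: the core of a trial like this is random assignment of recipients to a treatment or control arm. Here’s a minimal sketch in Python, assuming a simple two-arm design; the helper name and seed are hypothetical, not our actual tooling.

```python
import random

def assign_arms(recipient_ids, seed=2024):
    """Split a recipient list into treatment and control arms.

    Hypothetical helper, not our actual tooling: a fixed seed makes
    the random assignment reproducible and auditable.
    """
    rng = random.Random(seed)
    shuffled = list(recipient_ids)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return {"treatment": shuffled[:half], "control": shuffled[half:]}

arms = assign_arms([f"id-{i:04d}" for i in range(10)])
print(arms["treatment"])
print(arms["control"])
```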
We caught up with our colleagues who recently ran a trial using a similar methodology (they sent out email messages to encourage uptake of training). We talked about the realities of the technology we’re going to be using to push out the intervention (email campaign software): how we track (de-identified) clicks on links, what happens with email bounce-backs, what types of questions people asked, and more.
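To give a flavour of what “de-identified clicks” can mean in practice, here’s a minimal sketch, assuming a salted one-way hash is embedded in each tracked link. This is illustrative only; the helper names and the salt handling are my assumptions, not the campaign software’s actual mechanism.

```python
import hashlib
from urllib.parse import urlencode

# Hypothetical salt: in practice it would sit in a secrets store,
# separate from the mailing list, so hashes can't be reversed by lookup.
SALT = "replace-with-a-secret-salt"

def deidentified_id(email: str) -> str:
    """One-way hash of a recipient address, so clicks can be counted
    per recipient without storing who they are."""
    digest = hashlib.sha256((SALT + email.strip().lower()).encode()).hexdigest()
    return digest[:16]

def tracked_link(base_url: str, email: str, arm: str) -> str:
    """Build the link we would email out, carrying the de-identified
    id and the trial arm as query parameters."""
    return f"{base_url}?{urlencode({'uid': deidentified_id(email), 'arm': arm})}"

print(tracked_link("https://example.org/training", "jane@example.org", "treatment"))
```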
We also discussed some of the issues they’ve encountered. This includes managing partnerships with multiple stakeholders and getting authorisation from different areas (human resources, data experts, and so on). These are all highly detailed and important day-to-day realities of running a randomised controlled trial.
We followed up with a couple of our trial partners. In one meeting, we spoke with human resources, who have been an integral partner. The key person we’ve been liaising with is leaving. She’s a beautiful person, and we’ll miss her. We met with her and her replacements, who were all lovely, and talked about how we’re going to transition the role.
Then we met with the IT experts helping us with our implementation. We discussed the tool that we’re going to be using to deliver the intervention. It turns out that, as with most tools, it’s not going to be as straightforward as we would like.
We had a few meetings about the analysis plan for our trial. The principal analyst and I are going to do the first round of analysis. We also met with our senior data analyst. We’re trying to work through how we can test our assumptions and how we can work better together to improve the trial design.
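For a sense of what a first round of analysis can look like for a trial like this, here’s a minimal sketch, assuming the primary outcome is a simple click (or uptake) rate compared between the two arms. The numbers are made up for illustration; they are not our trial’s data.

```python
import numpy as np
from statsmodels.stats.proportion import proportion_confint, proportions_ztest

# Made-up numbers for illustration only (not our trial's data):
# link-clicks and recipients in each arm of a two-arm email trial.
clicks = np.array([312, 248])        # treatment, control
recipients = np.array([2500, 2500])

# Two-sample test of the difference in click rates between arms.
z, p = proportions_ztest(count=clicks, nobs=recipients)
rates = clicks / recipients
print(f"click rates: treatment={rates[0]:.3f}, control={rates[1]:.3f}")
print(f"difference={rates[0] - rates[1]:+.3f}, z={z:.2f}, p={p:.4f}")

# Per-arm 95% confidence intervals, to report alongside the test.
for arm, c, n in zip(["treatment", "control"], clicks, recipients):
    lo, hi = proportion_confint(c, n, alpha=0.05)
    print(f"{arm}: 95% CI [{lo:.3f}, {hi:.3f}]")
```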
All of that, as you might imagine, adds up to a very long day. But other work still needed attention. For example, because I co-manage our team’s public communications, I had to answer a couple of public comments on our published blog posts. I also followed up on the migration of our website.
I will now need to spend a couple of hours finalising results for another trial of mine which I’m scaling. That is, our intervention had a successful outcome, so we are now rolling it out to the rest of the state.
I’m exhausted! Tomorrow, I’ve got the day off. A rest is much needed.