To explain the analysis planning that we’re doing at the moment, I’ll tell you about the broader methodology we use for all our randomised controlled trials.
We have four phases: understand, build, test, and scale.
The first phase is called understand, where we review the academic literature and we undertake research to better understand the problem that we’re trying to address.
In this case, we’re working to improve the career outcomes for disabled people.
We do fieldwork. For our current project, we interviewed disabled people, senior managers, human resources experts, and other stakeholders to understand the behavioural barriers to career progression, and the behavioural enablers: what do disabled people want to see happen to improve their job prospects, and what programs and initiatives have they seen work well? Using the literature, we scope the possible interventions we can test.
We see what studies have been carried out successfully in other contexts, and what we could test in New South Wales.
Then we move into the second phase, which is build. Once we have narrowed in on the most viable behavioural intervention, we co-design it. We work with key stakeholders (e.g. disabled workers) and our agency partners to flesh out how the intervention will work.
We are in the build phase right now, where we keep going back to the literature to look at other research trials that have used a methodology similar to what we’re planning. We’re also drafting our trial protocol (our methodology documentation) to set out each step that needs to happen before we can go into the field and actually test our intervention. That covers everything from the information technology we need to deliver the behavioural intervention, to the analysis plan. For example, we’re reviewing the effect size we expect: based on other studies, and taking into account the local context, how big an impact do we think this intervention will have? We use statistical theory and pre-existing data to refine this estimate.
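To give a flavour of how an expected effect size feeds into trial planning, here is a minimal sketch of a sample size calculation for a binary outcome, using the standard normal approximation for comparing two proportions. The baseline rate, expected uplift, and function name are illustrative assumptions, not our actual trial figures.

```python
from math import ceil
from statistics import NormalDist

def sample_size_two_proportions(p_control, p_treatment, alpha=0.05, power=0.8):
    """Approximate participants needed per arm to detect a difference
    between two proportions, via the normal approximation."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided test
    z_beta = NormalDist().inv_cdf(power)
    # Variance of the outcome under each arm's assumed rate
    variance = p_control * (1 - p_control) + p_treatment * (1 - p_treatment)
    effect = p_treatment - p_control
    return ceil((z_alpha + z_beta) ** 2 * variance / effect ** 2)

# Hypothetical planning numbers: a 10% baseline rate for the outcome,
# with the intervention expected to lift it to 15%.
n_per_arm = sample_size_two_proportions(0.10, 0.15)
```

With these illustrative numbers the calculation lands at roughly 680 participants per arm, which is why a smaller expected effect size pushes the required trial size up so quickly.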
We review what we’ll be measuring, such as our control variables. We analyse de-identified administrative data to carry out randomisation and calculate our outcomes.
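A minimal sketch of what simple randomisation over de-identified IDs can look like (the IDs, seed, and fifty-fifty split are illustrative assumptions; real trials often use stratified or blocked randomisation documented in the protocol):

```python
import random

def randomise(participant_ids, seed=20240101):
    """Randomly assign de-identified participant IDs to treatment or
    control. A fixed, documented seed makes the split reproducible."""
    rng = random.Random(seed)
    shuffled = list(participant_ids)
    rng.shuffle(shuffled)
    midpoint = len(shuffled) // 2
    return {
        "treatment": set(shuffled[:midpoint]),
        "control": set(shuffled[midpoint:]),
    }

# Hypothetical de-identified IDs for a 1,000-person trial
groups = randomise([f"P{i:04d}" for i in range(1000)])
```

The important property is that assignment depends only on chance, never on anything known about the participant, so the two groups are comparable on average.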
The build phase ends with review of our trial protocol and ethics approval. The trial protocol includes everything from the literature that informs our intervention, to our analysis plan (what we are testing, plus our hypotheses), how we calculated the effect size, the formulas we’re going to use in our analysis, and how we’ll deal with ethics considerations and risks to the project.
The third phase is test, where we deliver the intervention and then measure whether it has had a statistically significant effect. That is, did our intervention actually work?
Most of our trials have a minimum of 1,000 participants. Analysis of results typically takes two months or longer. In some of my trials, we follow longitudinal outcomes 12 months later.
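For a binary outcome, the "did it work?" question often comes down to something like the two-proportion z-test sketched below. The outcome counts here are hypothetical, chosen only to match the scale of a ~1,000-person trial.

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(successes_t, n_t, successes_c, n_c):
    """Two-sided z-test for a difference in outcome rates between
    treatment and control; returns the z statistic and p-value."""
    p_t = successes_t / n_t
    p_c = successes_c / n_c
    # Pooled rate under the null hypothesis of no difference
    p_pool = (successes_t + successes_c) / (n_t + n_c)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_t + 1 / n_c))
    z = (p_t - p_c) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Hypothetical results: 75/500 positive outcomes in treatment
# versus 50/500 in control.
z, p = two_proportion_z_test(75, 500, 50, 500)
significant = p < 0.05
```

In practice the pre-registered analysis plan specifies the test, the outcome definition, and the significance threshold before any results are seen.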
We present our results to our governance group and stakeholders. We make recommendations based on our findings, and seek their expert input on the next steps.
We then publish our results on our website, regardless of whether or not the intervention was effective. Did it lead to the positive behavioural change we expected, did it have no effect, or did it have a backfire effect (the opposite, or an unintended, impact)?
The fourth phase is scale up. If our results were statistically significant, we roll out the intervention across the state.
This is a very tricky stage. We have positive results, and endorsement from senior executives to deliver the intervention as ‘business as usual.’ But now we have to liaise with (and convince) multiple agencies to adopt our behavioural change. This requires planning, resourcing, and in some cases tweaks to the intervention so it can be properly implemented.
Three of my projects are in scaling (two on vocational training, and one on education). More updates on this in future, I hope!