Statistics can be intimidating even for seasoned researchers, but for lived experience experts with little to no formal research training, it can feel overwhelming. Our Lived Experience Advisory Panel (LEAP) recently did some training as part of the Target Trials project.
The project uses complex statistical methods to assess how effective different NHS mental health therapies are.
LEAP member Alex Hubert, another LEAP member, and our PPI (Patient and Public Involvement) facilitator George Clarke share their reflections on the training below.
Account 1 – LEAP member
I finished the day feeling not just included, but needed
I arrived at Monday’s Target Trials training session with butterflies in my stomach – and a notebook covered in question marks.
“Causal inference” isn’t a phrase most people use over coffee, and I worried the content might fly straight over my head.
Two hours later I was sketching arrows on a virtual whiteboard, debating whether shift-work or social anxiety is the sneakier source of bias. Somehow the fear had turned into genuine excitement.
Principal investigator Matt broke the ice with the sunburn-and-ice-cream story, an analogy he used to explain confounding variables.
We looked at the relationship between increased sunburn and increased ice-cream eating; because the two rise together, it could look as though one causes the other.
However, that isn’t the case at all: the variable they have in common (the confounder) is sunshine – when sunshine increases, so do ice-cream eating and sunburn. In other words, the desserts aren’t to blame for burning skin!

In one move he translated “confounding” from a textbook term into something you can picture on a July afternoon.
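For anyone who likes to see ideas in numbers, here’s a minimal sketch in Python (my own illustration, not part of the training) of how sunshine, as a shared cause, makes ice-cream eating and sunburn move together even though neither causes the other:

```python
# Toy simulation of the sunburn-and-ice-cream example:
# sunshine (the confounder) drives both variables, which
# makes them correlate without either causing the other.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

sunshine = rng.normal(size=n)                  # the confounder
ice_cream = 2 * sunshine + rng.normal(size=n)  # driven by sunshine only
sunburn = 3 * sunshine + rng.normal(size=n)    # driven by sunshine only

# Looked at naively, the two seem strongly linked...
print(np.corrcoef(ice_cream, sunburn)[0, 1])   # roughly 0.85

# ...but on days with similar amounts of sunshine, the link all but disappears.
similar_days = np.abs(sunshine) < 0.1
print(np.corrcoef(ice_cream[similar_days], sunburn[similar_days])[0, 1])  # near 0
```

Holding the confounder roughly fixed, as the last two lines do, is essentially what statisticians mean by “adjusting for” it.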
From there we built our own diagram – a Directed Acyclic Graph (DAG) – but it felt more like causal doodling. “Directed” means each arrow goes one way (cause ➝ effect), and “acyclic” means you can’t loop back on yourself.
A DAG is like a flowchart for cause and effect – it helps you map out how different factors might influence each other, without getting stuck in circular logic. It makes your assumptions visible, so your analysis stays honest.
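If you want to check those two rules on a computer, here’s a rough sketch of my own using Python’s networkx library (not the whiteboard tool from the session): the first graph obeys both rules, and drawing an arrow back breaks “acyclic”.

```python
# The 'directed' rule: each arrow points one way (cause -> effect).
# The 'acyclic' rule: no path may loop back to where it started.
import networkx as nx

dag = nx.DiGraph()
dag.add_edges_from([
    ("sunshine", "ice cream"),  # sunshine -> ice cream
    ("sunshine", "sunburn"),    # sunshine -> sunburn
])
print(nx.is_directed_acyclic_graph(dag))  # True: a valid DAG

dag.add_edge("sunburn", "sunshine")       # drawing an arrow back...
print(nx.is_directed_acyclic_graph(dag))  # ...creates a loop: False
```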
Each box we dragged onto the screen (location, disability, baseline mood) had an immediate consequence: if we can measure it, great – if not, the gap stays red as a reminder.
Watching the diagram evolve in real time made the statistics feel tactile, almost game-like.
This was training session three in a mini-series for the lived experience advisory panel on the project.
Having those earlier layers already in place was a lifesaver: I recognised the RCT jargon as soon as it appeared and could follow the leap to real-world data without back-tracking.
My own quantitative background helped, but it wasn’t essential – several LEAP colleagues with no stats training were asking sharper questions than I was.
The biggest takeaway is an emotional one. I started the day thinking advanced analysis was a gated community; I finished it feeling not just included, but needed.
Our lived experience supplied variables the spreadsheets had missed – who else would think of night-shift schedules as a potential bias? That moment, when the team coloured our idea black on the DAG, made the entire project feel jointly owned.
If there’s one thing this session proved, it’s that you shouldn’t start with statistics – you should start with ownership.
What made the training transformative wasn’t just the explanation of randomised controlled trials or causal diagrams, it was the invitation to actively shape the process.
We weren’t just learning about how research works, we were contributing to it. Using tools like DAGs became less about technical knowledge and more about creating space for lived experience to challenge default assumptions.
That emotional shift — from outsider to co-creator — is what made the statistics meaningful, and the learning stick.
Account 2 – Alex Hubert, LEAP member
It was easy to think concepts were beyond my grasp
With compassionate enthusiasm, over three interactive sessions on Zoom, researchers and staff from McPin and Sheffield University taught us about research:
- How research works
- Understanding trials
- Real world trials
How research works focused on research structures and processes; understanding trials provided an introduction to randomised controlled trials (RCTs); and real world trials highlighted how health data can be used to study treatment efficacy when RCTs aren’t possible.
It was clear that a lot of preparation had gone into making every session as accessible as possible.
For example, in the understanding trials session we were asked to shuffle a pack of cards, then draw one card at a time, recording whether it was red or black – as if we were allocating participants according to a randomisation schedule.
It helped me to connect an intellectual idea to a tangible experience, bolstering understanding. Perhaps in future sessions, more of these activities could take place.
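For the curious, here’s a small Python sketch of that activity (my own, and the “therapy A” and “therapy B” labels are just placeholders): shuffle a deck, draw one card per participant, and let the colour decide the allocation.

```python
# Shuffle a deck, draw one card per participant, and let the
# colour of the card decide which group they are allocated to.
import random

random.seed(42)                          # fixed seed so the example repeats
deck = ["red"] * 26 + ["black"] * 26     # a standard deck, by colour
random.shuffle(deck)

participants = ["P1", "P2", "P3", "P4", "P5", "P6"]
allocation = {
    person: "therapy A" if deck.pop() == "red" else "therapy B"
    for person in participants
}
print(allocation)
```

Because the deck is shuffled, neither the participant nor the researcher can predict or influence which group comes next – which is the whole point of randomisation.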
Every researcher was patient and encouraged us to ask questions, giving us plenty of time to do so.
Sessions were two hours over Zoom. In hindsight, it might have been nice to have had a bit more time. Sometimes, the researchers had to skip or rush certain topics so as not to overrun, and while I was okay with evening sessions, I would have liked at least one daytime session, as my brain does not function as well in the evenings.
As a former researcher myself, with the experience of talking to academics regularly, I was able to grasp the training concepts quite quickly.
Remembering when I first started, however, when research felt incredibly daunting, it was easy to think that research concepts were beyond my grasp.
In reality, most of the gap came from the pace of conversation and unfamiliar technical lingo. Much like feeling out of place in a country with an unfamiliar language and culture, it can take some time to learn and adjust.
When researchers take the time to teach LEAP members research concepts in an accessible fashion, members are subsequently better able to communicate their experiences.
This is especially true in complicated projects. In our project, for example, without an idea of what confounders are, it would be difficult for us to help researchers recognise, through our experiences, the factors they might otherwise have missed.
Account 3 – George Clarke, PPI facilitator
Following the LEAP through this process of training has been incredibly insightful for me.
I started the process much like the rest of the panel members, with only a minor level of background knowledge and understanding of quantitative analysis.
Through the training sessions we have delivered so far, I have gone on my own journey of understanding, and I now feel I have a good grasp of what this project is actually trying to accomplish. But I have also gained an insight into how best to bring people up to a level of knowledge which allows them to meaningfully contribute to the research.
This involves presenting technical ideas and concepts in plain and accessible language, and using activities and examples to make these ideas real for people. But it also means having some ambition for the ability of the panel members to understand technical concepts.
By keeping a high standard, and not oversimplifying things, we have been able to support LEAP members to reach a sufficient level of knowledge. As a result, they will understand exactly the kinds of analysis the research team will be doing, and exactly how they can impact this.
For example, the LEAP will help to create a DAG (Directed Acyclic Graph) to show all the possible confounders which can impact people’s choice of therapy, so they can be accounted for in the analysis.
Without an understanding of confounders and how they impact data analysis, the LEAP wouldn’t be able to meaningfully inform this.
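To see why, here is a toy sketch of my own in Python (not the project’s actual analysis): imagine a hypothetical confounder, baseline anxiety, that influences both which therapy someone chooses and how well they do afterwards. Ignoring it distorts the estimated effect of therapy; adjusting for it recovers something close to the truth.

```python
# Toy example: baseline anxiety (a hypothetical confounder) affects both
# which therapy people choose and how well they do afterwards.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

anxiety = rng.normal(size=n)                   # the confounder
therapy = (anxiety + rng.normal(size=n)) > 0   # choice of therapy depends on anxiety
outcome = 1.0 * therapy - 2.0 * anxiety + rng.normal(size=n)  # true effect = 1.0

# Ignoring the confounder gives a badly distorted estimate...
X = np.column_stack([np.ones(n), therapy])
print(np.linalg.lstsq(X, outcome, rcond=None)[0][1])  # far from 1.0

# ...while adjusting for it recovers something close to the true effect.
X = np.column_stack([np.ones(n), therapy, anxiety])
print(np.linalg.lstsq(X, outcome, rcond=None)[0][1])  # close to 1.0
```

The analysis can only make that second, adjusted comparison if the confounder has been spotted and measured – which is exactly where the LEAP’s contribution to the DAG comes in.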
More information about the project can be found on the official project website.