Understanding the Experiences of Early Career Researchers
In May 2022, the History & Philosophy of Science (HPS) program hosted Nicole Nelson, Associate Professor at the University of Wisconsin-Madison, who uses ethnographic and historical methods to study methods development and uncertainty in the biomedical sciences. During her visit, she delivered a public lecture, ‘Controlling the Interpretation of Replication Experiments’.
In the lecture, Nelson discusses how common reactions to unexpected experimental results can harm researchers and slow the pace of self-correction in science. What does it mean when an experiment fails to produce the expected result? Does it indicate a problem with the original study, or a lack of skill on the part of the replicator? Deciding how to interpret these moments can have a profound impact on the reputation and wellbeing of the early career researchers who typically perform the experiments. The lecture also examines the ambiguities inherent in replication attempts and how the labour structure of science biases scientists towards attributing unexpected results to student error.
You can watch a recording of the lecture and read a follow-up interview with Forum’s Carl Joseph Sciglitano below.
Many of your major projects centre on the experiences of graduate students and early career researchers. What interests you about this group?
I find them to be an interesting group of people to study because they’re getting enculturated into a field. They’re different to Principal Investigators, who’ve been there for a long time and kind of have their shtick. Graduate students are actively becoming part of a knowledge community and are often faced with needing to undo some previous assumptions.
What struck me when I first started working on the reproducibility topic was the feedback I received from graduate students. I would give these talks about reproducibility, and several times afterwards a bunch of students would come up and remark, “that happened to me”. They would tell me it was actually very helpful to hear the broader history of reproducibility issues, because they realised it was not something unique to them.
How has the challenge of reproducing other people’s work shaped the way graduate students perceive their own performance?
When I started this part of the project on early career researchers, I was thinking about the twin crises of high rates of error and problems reproducing studies, and the high incidence of mental health problems. Nobody was asking whether those had any relationship to each other, so that’s what I was interested in looking at.
When people talked about experiences where they were looking at something from the literature and didn’t have enough information on it, it was perceived as an expected knowledge gap. Whereas when they failed to replicate something from their own lab, or even their own prior work, that made students feel like it must be something particular to them. When we asked how the failure to replicate impacted them, they eventually talked about the mental health effects: how they were up at night trying to figure out what they did wrong and not sleeping well, or believing that they were a bad scientist and were never going to get it to work. Some people told us how they came really close to dropping out of grad school. Very poignant stories.
The interesting thing about failures to replicate, to me at least, is that it’s a fundamentally ambiguous situation: you could give a lot of potential explanations for what’s changed, and it would take you a long time to chase down all of them. So, in the absence of better data, people start to make assumptions about what they believe to be most likely. And that’s where the sociological interest lies for me: looking at the patterns in the assumptions people make.
Given the mental health challenges, what can be done differently to help overcome these feelings of failure?
Students tend to do better if they have a mentor who believes them and says things like, “I don’t think this is you messing up. I think this is a real result. Let’s talk about why this works.” Unsurprisingly, when a mentor just flat out doesn’t believe them, that’s a lot harder for the student. What was surprising, however, was the reaction from students who went in with the expectation or assumption that results were going to vary, or that not everything that was published was true. These students were less surprised when it happened to them.
In a training video from the National Institutes of Health in the United States, Francis Collins, then director of the NIH, talks about his own experiences as a graduate student. He shares a story about ending up crying in the bathroom after struggling to master a new set of skills. Hearing it from someone who is not only senior but also quite successful really helps normalise that: it’s a thing that happens to everyone. I think that points towards a way you could potentially make this better.
The theme of ‘assumptions’ keeps coming up, whether in what students assume to be the reason for irreproducibility or in the assumed complexity of model organisms that you write about in your book Model Behavior. Is this an intentional thread you’ve woven into your research?
It definitely is. I will say, though, it’s actually more of a retrospective thread, in the sense that, for better or worse, I’ve tended to follow along with a certain group of theoretical interests and also some empirical ones. Then, at a certain point, you look backwards at the path you’ve traced and realise that all these things come together. The thing that always seems to interest me is the way scientists build and deploy methods, and the assumptions they make when they’re building or using those methods. So that obviously touches on a lot of things; arguably, on a lot of science.
How have you found your Melbourne visit and what’s next on the agenda for you?
There’s a real critical mass of people here who are interested in these issues of reproducibility and replication, and the repliCATS project plays a big part in bringing all those people together. This has been a great opportunity to give a series of lectures and get feedback and ideas from a lot of the other folks here on what might be promising directions to go in. I hope some paper collaborations will come out of this as well.
One of the first things I’m going to do is take the feedback I’ve gotten on these papers and write up two of them, prompted by a great discussion with Fiona Fidler and her group. I’m then going to dig a lot more into the history of the statistical debate over replication. I think this visit has really enriched my work in that way.
Any parting words for our readers who may be thinking of going into HPS or STS?
For people who might find themselves interested but feel that they don’t necessarily have enough expertise or background to be able to engage, I’d say come check it out. It’s an interesting field in that almost everyone falls backwards into it from some other thing. It’s kind of like a trap door to a different land. So, fall and come down the rabbit hole. It’s an interesting place to be.
Nicole C Nelson’s book, Model Behavior, offers an inside view of a team of scientists researching the genetics of alcoholism in mice, and of how their work creates and manages foundational knowledge in the field. Sections of the book and other publications are available on her website. Nelson’s work on reproducibility is currently available as a preprint.