Introducing Constructivism

What is it? 

Epistemology is the branch of philosophy that studies knowledge. Constructivism is a type of epistemology – a philosophical viewpoint about how we know about the world. The idea of philosophy may turn some of you off, but bear with me – this is one of the cornerstones of my research.*

Constructivism is in between realism and solipsism. Realism is the belief that there is a world external to our perceptions, and that our senses impart awareness of that world. Solipsism, essentially the polar opposite, holds that the mind is the only thing that can be known for sure, and knowledge about anything beyond the mind is uncertain.

The basic idea of constructivism is that our knowledge of the world is composed of mental models that explain the way we experience the world. This differs from realism because constructivism doesn’t model the external world itself, but rather our perception of it. Constructivism also differs from solipsism, because it doesn’t deny the existence of a real world external to our perceptions. However, it holds that our only experience of that world is mediated by our senses. Therefore, it is impossible to have knowledge of the external world as it exists beyond our senses.

Why is it interesting for science? 

Let’s think of scientific knowledge specifically as a collection of constructed models. If this is the case, what would that entail?

First, we can ask what the purpose of our model is. The general answer is to make sense of our sensory perceptions – to explain how we experience the world. But this answer, for many, includes an unstated assumption: by making sense of the world, we enhance our ability to change, influence, or control it. This reveals another purpose of model construction: control. In the case of disease, for example, the reason we do research is to cure, treat, or prevent it. This line of thinking unearths many assumptions to be questioned: what counts as an explanation, or a cause? How is that related to purpose? How is it related to understanding?

Next, if these models represent the way we interact with an external reality (rather than the external reality itself), that introduces the idea that multiple, differing models of the same thing can happily coexist and all be “right”. In that case, the differences come from different observers, and perhaps the different types of measurements they have chosen to take. This is a radically different way of thinking about scientific knowledge than the traditional view of objective representations of reality, separate from any observer.

If we question the objectivity of science, we must explore the consequences of acknowledging that it is subjective or arbitrary. This is easy to see in some situations. For example, whenever a subject is being researched, there are infinitely many questions that may be investigated. However, the scientist chooses one or a few to focus on. How is that choice made? It is subjective, based on the valuations, justifications, and existing models of reality of that individual. Even the most rational exploration of possible questions must narrow the field of possibilities in an arbitrary manner at some point.

This line of questioning could continue ad infinitum. But most scientists don’t stop to think about the way they do science, or the origin of their mental models. This is the impetus for second-order science, a new domain of science founded on the ideas of constructivism. The aim is to expand the scope of science, leading to innovation and increased reliability and robustness in research.

One of the new aims of my project is to apply the ideas and methods of second-order science to biology, focussing specifically on immunology. While I am very aware of the role that philosophy has to play in this endeavor, I suspect it will not be a focal point in the work I produce, simply because at this stage it would be a distraction for a biological audience. Nevertheless, I will continue to post about the philosophical roots of my project, because they have a huge impact on the way it is developing.


*I am not an expert in philosophy by any means: these descriptions are necessarily limited.

Photo credit: Michael Kalus / CC BY-NC-SA

Why I enjoyed Medawar’s “Induction and Intuition in Scientific Thought”

Peter Medawar was by all accounts a brilliant and witty man. He shared the 1960 Nobel Prize in Physiology or Medicine for the discovery of acquired immunological tolerance, work rooted in the study of graft rejection that had a huge impact on the field of immunology. His wit and personality come through in this little volume, originally given as a series of lectures in 1968. I found this book to be rich in ideas, and still relevant today. I begin with a summary of the book (with comments), then describe its place in my research. In many cases I will quote him directly, mainly because I enjoy his prose.


Medawar states in the preface that the lectures “began in my mind in the form of a question: why are most scientists completely indifferent to – even contemptuous of – scientific methodology?”(p. vii)*. It wasn’t until this past year that I learned the definition of methodology. It is not, as I had previously thought, a fancy word for methods, or a set of methods. It is the analysis of methods. Of course individual methods are analysed carefully for effectiveness (eg PCR or flow cytometry), but that’s not what Medawar is talking about. He’s referring to “The Scientific Method” and the way we think when we do science.

Chapter 1: The Problem Stated

In the first chapter, Medawar explains the question. What exactly do scientists (specifically biologists) actually do to make scientific discoveries? He argues that most scientists are themselves unable to answer this question. The few who have tried either produce misrepresentations or are not scientists at all but lawyers, historians or sociologists (the notable exception being William Whewell, the nineteenth-century philosopher of science, whom Medawar refers to repeatedly). Nevertheless, scientific discovery continues! So why bother with scientific methodology at all? He suggests it would address questions of (1) validation, (2) reducibility and emergence, and (3) causality, which are of interest to all sciences (even the social ones).

Chapter 2: Mainly About Induction

In this chapter Medawar argues that induction, long referred to as the core of the scientific method, simply isn’t. This is not a new or unique argument, and he explains why: “Induction, then, is a scheme or formulary of reasoning which somehow empowers us to pass from statements expressing particular ‘facts’ to general statements which comprehend them. These general statements (or laws or principles) must do more than merely summarize the information contained in the simple and particular statements out of which they were compounded: they must add something … Inductive reasoning is ampliative in nature. … This is all very well, but the point to be made clear is that induction, so conceived, cannot be a logically rigorous process. … No process of reasoning whatsoever can, with logical certainty, enlarge the empirical content of the statements out of which it issues.” (pp. 23-4)

One of Medawar’s problems with induction is that it doesn’t account for the critical use of experiments. He describes four different types of experiments:

  1. Inductive or Baconian, what I would call exploratory experiments, of the type ‘I wonder what would happen if…’. These are not critical experiments. They are meant to “… nourish the senses, to enrich the repertoire of factual information out of which inductions are to be compounded.” (p. 35)
  2. “Deductive or Kantian experiments, in which we examine the consequences of varying the axioms or presuppositions of a scheme of deductive reasoning (‘let’s see what would happen if we take a different view’).” (p. 35)
  3. “Critical or Galilean experiments: actions carried out to test a hypothesis or preconceived opinion by examining the logical consequences of holding it.” (p. 37)
  4. “Demonstrative or Aristotelian experiments, intended to illustrate a preconceived truth and convince people of its validity.” (p. 37)

His argument is that multiple stages of experimentation are necessary in the course of original research, and critical experiments (type 3) are necessary to progress beyond “academic play” (p. 38). The version of the scientific method I was taught in school described only critical experiments – I consider that to be just as serious a misrepresentation of science methodology as only including Inductive or Baconian experiments.

Looking at this list from a modern perspective, I wonder whether we are able to distinguish so clearly between critical and demonstrative experiments. What exactly separates the two? Is it intent? If so, how are we to judge another’s experiments simply by reading a paper in a journal? The conclusion I have drawn after reflecting on this list is that published papers may be a good way of disseminating results, but they are very poor at representing the methodology of science. As Medawar says later in the book, “The critical process in scientific reasoning is not … wholly logical in character, though it can be made to appear so when we look back upon a completed episode of thought.” (p. 53). In other words, papers present a logical progression that is visible only in hindsight and doesn’t reflect the reality of research. Baconian experiments are necessary to embark on any new area of research, but they are rarely published unless something extraordinary is stumbled upon.

Medawar goes on to explain the specific shortcomings of induction as a methodology, at the same time highlighting the requirements of a good methodology. This is his summary at the end of the chapter:

  1. “Inductivism confuses, and a sound methodology must distinguish the process of discovery and of justification.
  2. The evidence of the senses does not enjoy a necessary or primitive authenticity. The idea, central to inductive theory, that scientific knowledge grows out of simple unbiased statements reporting the evidence of the senses is one that cannot be sustained.
  3. A sound methodology must provide an adequate theory of special incentive – a reason for making one observation rather than another, a restrictive clause that limits observations to something smaller than the universe of observables.
  4. Too much can be made of matters of validation. Scientific research is not a clamor of affirmation and denial. Theories and hypotheses are modified more often than they are discredited. A realistic methodology must be one that allows for repair as readily as refutation.
  5. A good methodology must, unlike inductivism, provide an adequate theory of origin and prevalence of error…
  6. … and it must also make room for luck.
  7. Due weight must be given to experimentation as a critical procedure rather than as a device for generating information; to experimentation as a method of discriminating between possibilities.” (pp. 40-41)

Chapter 3: Mainly About Intuition

In the final chapter, Medawar makes a case for a “hypothetico-deductive” scheme of science methodology, originating from many thinkers including Kant, Robert Hooke, Stephen Hales and Roger Boscovich, and advocated in Medawar’s time by Karl Popper. He details how each of the seven requirements for a better methodology is met by this model, but I won’t get into that here. His focus, and I think the more interesting aspect of the chapter, is on the role of intuition or creativity in this model. “Scientific reasoning is an exploratory dialogue that can always be resolved into two voices or two episodes of thought, imaginative and critical, which alternate and interact.” (p. 46).

Again, he describes four types of creativity (not ruling out the existence of more): deductive, inductive, wit, and experimental flair. The details are less important than the conclusions he draws from their existence: “… an imaginative or inspirational process enters into all scientific reasoning at every level.” (p. 55). “That ‘creativity’ is beyond analysis is a romantic illusion we must now outgrow. It cannot be learned perhaps, but it can certainly be encouraged and abetted. We can put ourselves in the way of having ideas, by reading and discussion and by acquiring the habit of reflection, guided by the familiar principle that we are not likely to find answers to questions not yet formulated in the mind.” (p. 57).


Some thoughts

I loved reading this book. Every section seemed to cut right to the heart of an issue and reveal it starkly, thanks in large part to Medawar’s lovely style of writing. The questions he identifies as important across scientific disciplines (validity, reduction and emergence, and causality) remain relevant today, and I have a lot of personal interest in them. Beyond that, the overall premise of the book is closely related to my research. A large part of my work at the moment is focused on second-order science. As defined on the website: “First-Order Science is the science of exploring the world. Second-Order Science is the science of reflecting on these explorations.” It seems to me that Medawar is doing exactly that when he talks about science methodology.

Throughout the text Medawar advocates self-reflection, or reflexivity, in science. Unfortunately, most scientists remain as unconcerned with such things today as they were 50 years ago – that’s the domain of philosophers of science – despite the potential implications. What would be the benefit of engaging with science methodology in the way Medawar did? He says: “Most scientists receive no tuition in scientific method, but those who have been instructed perform no better as scientists than those who have not.” (p. 8). Could we change that state of affairs? Could we teach ourselves to be better scientists if only we could describe what we are doing?

In the past year I have done a lot of reading about systems theory, with a bit of complexity science and cybernetics thrown in there. A key part of General Systems Theory, as originally defined by Ludwig von Bertalanffy, is the idea that there are patterns and general rules that can be found and applied in systems across all disciplines. In that sense, it is a transdisciplinary theory. Cybernetics, as defined by Norbert Wiener, is the study of communication and control, now referred to as the study of regulatory systems. It is also transdisciplinary, and closely related to systems thinking in intellectual lineage. Given that background context, I was very excited to read the following passage (pp. 54-55):

“There is nothing distinctively scientific about the hypothetico-deductive process. It is not even distinctively intellectual. It is merely a scientific context for a much more general stratagem that underlies almost all regulative processes or processes of continuous control, namely feedback, the control of performance by the consequences of the act performed. … scientific behavior can be classified as appropriately under cybernetics as under logic.”

This observation, combined with his repeated suggestion of reflexivity, shows that Medawar was thinking in terms of second-order science. This is a lovely example of synchronicity, or the same idea occurring separately in many places at once, because a group of cyberneticians described second-order cybernetics shortly after.

The more I think about what Medawar wrote, the more I link it to the rest of my research. I will certainly be referring to Induction and Intuition in Scientific Thought in future posts, and I highly recommend it if you have any interest in the way scientists think.


*All page number citations refer to:

Medawar, P. B. 1969. Induction and Intuition in Scientific Thought. Vol. 75. Memoirs of the American Philosophical Society. Trowbridge & London: Redwood Press Limited.


Photo credit: DiariVeu / CC BY-NC-SA

Niche: identified

The most recent Thesis Whisperer post featured a list of unsolicited advice received by a newly minted PhD student before starting her degree. At this point, a year and a month into my PhD, the piece of advice I found most salient is “find a niche and follow your passion.” That about sums it up.

Reflecting on this past year, I can certainly say I have followed my passion. I chose to develop my own project from scratch (with the enthusiastic support of my supervisors) guided by nothing but passion. Of course there were moments (or stages, even eras) when I lost confidence, but I stuck with it and passion carried me through. But it wasn’t enough to convince anyone – from the very inception of my project I’ve been met with confusion (and hesitation) by most everyone except my supervisors. Of course, that makes perfect sense when you factor in the total lack of specificity: my passion is very general. To the point of being kind of useless on its own. And that’s what the niche is for!

I have been doing “literary spelunking” for over a year now. I read broadly, and deeply, and learned a lot of interesting things. But at some point even I began to despair that I would never be able to narrow down – cue some existential angst. It took patience and some hard thinking, but eventually I realised that there were certain ideas that just kept popping up – and not only that, but they were actually related. When I started thinking about those relationships, I started to define my niche. Today, in a very animated meeting with my supervisors, we drew it out in a Venn diagram. And there it was, on the whiteboard. Finally.

The point of a niche is not to be restricted to it – I haven’t lost general passion because I’ve now identified a niche. But I’m grounded in it. My passion is guided and made useful by the boundaries it provides. It’s my intellectual home base. And, for bonus points, I actually get to build it! Maybe one day it will expand into a full-fledged city.


Photo credit: madelyn * persisting stars / CC BY

Chickens, eggs, and mutual causality

Which came first, the chicken or the egg? The way this question is phrased implies a linear causality: one exists first, and then gives rise to the other (A → B). We have a tendency to see things this way. We look for root causes, explanations that can be traced back to one person, event, bacterium or machine. If we can find that cause, we can solve problems and make sense of the world. This strategy has worked well for us – we use antibiotics to kill bacteria, we stopped using the chemicals that were thinning the ozone layer, we learn in school that the assassination of Archduke Franz Ferdinand triggered WWI and that the invention of the steam engine led to the industrial revolution.

But we struggle with the chicken and egg problem. Which came first? Neither. Our difficulty in answering the question betrays the game – there is no linear relationship here. It’s more like A ↔ B. The chicken and egg reproductive process is the result of a long evolutionary history, each iteration influenced by the environment and the one that came before. If we take small steps back in time, the difference between each iteration is so small as to be indistinguishable. But take a leap back, and you perceive a difference. At what point in that history of development can we draw a line, when either side of that line will always look basically the same?

We don’t like this kind of problem. It’s harder to think about and it’s harder to solve. If you need experience to get a job, and you need a job to get experience, what should you do? If you need enzymes to make proteins, and enzymes are proteins, how did the first enzyme get made? What about the origin of life?

We have struggled with these questions for centuries. Creation myths and religious explanations are part of every culture, but even they leave us wanting more. My grandfather was a pastor, and he believed that the world was created by God. I asked him once where God came from, and he told me a story.

A pastor was walking on the beach one day, asking himself who or what had created God. He encountered a boy, running back and forth between the ocean and a hole in the sand. The boy was carrying a pail, and filling the hole with water from the sea. “What are you doing?” the pastor asked. “I’m putting the ocean into this hole,” the boy replied. “But the ocean is much too big to fit in that little hole!” he said. And the pastor realised that questioning the origin of God was much the same – too big a question to fit in a human mind.

This is the problem with linear causality. If we search for the one start, cause, origin – there is always something that came first, some earlier start, cause, origin.

Another way of looking at relationships is to see mutual or circular causality. In systems theory this relationship is described using feedback loops. In Buddhism, it’s called Pratītyasamutpāda, or dependent co-arising. Joanna Macy describes the concept well in her book Mutual Causality. My intent here is not to describe mutual causality but to discuss the potential impact of seeing relationships in this way. Using the chicken and egg example, we can say that the chicken and egg came into existence together, co-dependently, each creating the other iteratively over time. We see the relationship between them in a new light, and can then start to ask different questions. To me, this is the core of complex systems theory.
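To make the idea of a feedback loop concrete, here is a minimal numerical sketch – my own toy example, not drawn from Macy or any real biological system, and the `rate` parameter is arbitrary, chosen only to make the loop visible. Two quantities are coupled so that each one’s next value depends on the other’s current value:

```python
import math

def step(a, b, rate=0.2):
    """One iteration of a mutually coupled pair (A <-> B):
    A's next value depends on B, and B's next value depends on A."""
    c, s = math.cos(rate), math.sin(rate)
    return c * a - s * b, s * a + c * b

a, b = 1.0, 0.0
history = [(a, b)]
for _ in range(100):
    a, b = step(a, b)
    history.append((a, b))

# Each variable rises and falls in response to the other. Asking which
# one "drives" the system has no answer: each is simultaneously cause
# and effect, and the oscillation belongs to the loop as a whole.
```

Tracing `history` shows each variable peaking just as the other crosses zero – a cyclical pattern, reminiscent of predator–prey dynamics, that exists only because of the loop, not because of either variable on its own.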

In immunology, the field my research project focuses on, one of the aims of research is to contribute to disease prevention and treatment. Many of the most widespread treatments currently rely on linear causality to identify treatment targets: antibiotics, antivirals, immunosuppressants. But what we experience as disease isn’t just caused by a pathogen in a linear way; it arises from the interaction of our bodies with the pathogen. The same bacterium could make you sick or not depending on where it enters your body, in what quantity, at what time in your life, and alongside what other microbiota. Whether you experience symptoms of sickness depends on how your body, in the condition of that moment, interacts with that pathogen, as it appears in that moment.

It may seem like adopting this perspective would make it impossible to find solutions, but it actually presents more opportunities for interventions and treatments. Vaccines are the most prevalent example of a medical intervention that takes a systemic approach. When we get an infection and then experience sickness, the pathogen is perturbing the body (a complex system) in such a way that it undergoes a state change, from healthy to ill. A vaccine relies on the inherent structure of the immune system and the immune response to perturb the system to a different, desired state: immunity to that particular pathogen. There are many potential ways to perturb a system, and these can be exploited for novel treatment pathways. One example is the addition of bacteria to treat unwanted bacterial infections or chronic inflammation (you may have heard of fecal transplants being used to treat IBS).
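The perturbation-and-state-change picture can also be sketched as a toy model – again my own illustration with arbitrary dynamics and labels, not a real immunological model. The system below has two stable states: small perturbations decay back to the current state, while a sufficiently large one tips the system over the barrier into the other state.

```python
def relax(x, dt=0.1, steps=200):
    """Let the system settle. The dynamics dx/dt = x - x**3 have two
    stable states, x = +1 and x = -1, separated by an unstable
    tipping point at x = 0."""
    for _ in range(steps):
        x += dt * (x - x ** 3)
    return x

healthy = relax(1.0)                 # stays in the +1 state

# A small perturbation decays: the system returns to the same state.
still_healthy = relax(healthy - 0.8)

# A large enough perturbation crosses the barrier at x = 0, and the
# system settles into the *other* stable state -- a state change.
ill = relax(healthy - 1.8)
```

Which basin we label “healthy”, “ill”, or “immune” is just interpretation; the point is that an intervention can be seen as a perturbation chosen to leave the system in a desired stable state.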

Treatments are certainly getting more creative, and I don’t mean to imply that nobody is finding novel solutions or using systemic methods. However, I think much of the research that goes on (eg finding drug targets) is still based in a conceptual framework of linear causality. The complexity and interconnectedness of biological systems are becoming more apparent as we gather more information, but widespread adoption of a systems perspective has not yet occurred. Innovations like bacteriotherapy arise in part from urgent concerns like increasing antibiotic resistance, and the pressing need to find alternative solutions. I argue that such innovation would be more easily attained if our conceptual frameworks included mutual causality. A post from the blog Emergent Cognition explains it well: “Although one perspective isn’t inherently better or worse than another, each reflects a way of seeing with unique affordances and constraints on how we think.” We can only gain from broadening our perspectives to include circular causality and complex systems thinking.


Photo credit: Infomastern / CC BY-SA

The Hiatus or Rethinking why I blog

To commemorate the return of the blog, a brief account of The Hiatus, in 5 acts.

Act I: The taking stock, in which I follow New Year’s tradition

I decided to take some time over the holiday season to think about this blog, and whether there were any changes I could make for the new year. I had a feeling that the Week in Review posts weren’t quite right, but couldn’t put my finger on it. I revisited my reasons for blogging and read everything I had written so far.

Act II: The abashed confession, in which I am disappointed in myself

My previous posts accurately reflect the reality of my research experience last term. I spent a lot of time learning about and trying to develop soft/supporting skills. I took the time to think about what I was learning, incorporate it into my daily behaviour and future planning, and write about it. In the research realm, I read and took notes on my readings, but went no further. I balked at taking the extra steps of analysis, integration and writing. I had a feeling this imbalance was forming throughout the term, but looking back with hindsight it’s almost painfully clear. I simply wasn’t working hard enough on my research.

Act III: The report, in which I have to write (dun dun dun)

My first official progress review was in mid-January, and required a written report. Writing that report was extremely difficult for me, even though it was only 2,000 words, and a good 1/3 of it came directly from my project proposal. Every sentence was like pulling teeth. I started working on it weeks in advance, but because it was so horrible I made little progress until the looming deadline forced me to work through the pain. I spent most of my time and energy trying to crystallise one new idea and incorporate it fluidly into the introduction and the project framework. This was a tricky task, and the result could have been much better had I invested more time into it. In the end, I wasn’t thrilled with the quality of the report, but it was passable.

Act IV: The meeting, in which I realise the value of writing

The meeting went really well. The report was used as a starting point for discussion, and we had a very lively and productive talk. Several participants have outside perspectives, so they were able to identify weaknesses and areas of the project that needed more clarity. I walked out feeling energised and motivated. The experience taught me two essential things about the way I work and had been working:

[1] If I had gone into the meeting without having written the report, my arguments would have been poorly constructed and my participation in the discussion wouldn’t have added much. As I read and learn, all that new information isn’t falling onto a blank slate, it’s joining a growing network of information and making new connections. That network is full of inchoate ideas. The trick is to develop them into actual ideas that can be shared in words. This doesn’t happen automatically – it takes work, and it can be really hard, but it is absolutely necessary. Without that work, I would remain incoherent. Writing forces me to do the heavy lifting and get that work done.

[2] The act of writing something down seems so final to me – as a consequence I have been afraid to record ideas that I know will change. But nobody in that meeting was expecting immutable truths in my report. Nothing I say or write now will be held against me in the future if it changes. I am free to use writing as a tool in the process.

Act V: The road ahead, in which I shake off the self-pity and get constructive

So I messed up a little last term by not working hard enough in my research, and not biting the bullet and writing. The good news is that I’ve realised my mistake, and it has actually turned into something pretty exciting for the future of this blog. My original reasons to blog included documenting the research process and sharing my ideas to develop courage. Now I think it’s appropriate to take it a step further. Writing this blog can be an active part of my research, and allow me to develop my ideas. I’m not sure how this will shake out because – to be frank – my project is hard. I am finding it difficult to do, and even more difficult to talk about with other people. But, as The Hiatus ends, I will go ahead anyway; not everything has to be figured out ahead of time.


Photo credit: anieto2k / CC BY-SA

Week in Review: Researcher Development Edition


Last week was full of holiday season campus celebrations. Coming from the US, mince pies are still a novelty to me – or at least they were, before I ate about a dozen of them. So this week, my tea (8) and coffee (7) intake was supplemented with mulled wine. Maybe this should become a regular thing, because I had a pretty good week! I practiced yoga for 2 hours and spent 132 minutes sitting on the exercise ball, although once again I have not begun meditating. I spent 147 recorded minutes writing (plus some extra) and read 5 papers. My goal is to read many more than 5 papers in a week, but my academic reading was so slow when I started that this is a significant improvement.

Researcher Development Edition

Last week I mentioned the Researcher Development Framework (RDF) developed by Vitae. It articulates the skills one needs to develop over time to be a well-rounded researcher, and is used by my university to help students with personal and professional development.




I have been shown this wheel in several mandatory training meetings over the past year, and I expect I will be seeing it again. The Researcher Development team at the university are very keen on it, and my department actually requires us to spend 10 days per year on training/development activities. In order to get our degrees, we must write a summary of what we have done to develop our skills in each domain. A lot of people find this to be useless box-ticking, and dislike the time it takes away from research. But I think it’s performing a valuable and difficult service: to spell out otherwise tacit knowledge.

There are many unwritten rules in academia. Things like teaching, paper submission and review procedures, building collaborations, and professional etiquette are all “learn as you go” skills that you pick up through experience. Learning by doing may be effective, but a little preparation never hurt either. That’s why I like the RDF – it tries to explicitly point you in the right direction. The full report online has more detailed descriptions of each area, including development over time. I can look at the skills and abilities listed in Phase 1 or 2 and ask myself whether I feel competent at them, and identify where I need to expend effort to develop further. I can also look ahead to Phase 3 or even up to 5, to plan a route of skill development, especially in the areas that are more relevant to my long-term career goals. The university offers a host of researcher development seminars and workshops that I can sign up for once I’ve identified a gap in my skill set. Or, I can approach my supervisors for advice, armed with specific questions. “Do you think I’m rigorous in argument construction and production of evidence? If not, can you suggest ways that I can specifically develop that ability?”

Another reason I like the RDF is that it describes transferable skills (communication, organisation, self-management) that are useful for any career. While many supervisors don’t discuss this with their students, the fact is that there are very few permanent jobs available in academia – many of us working towards a PhD now will end up in industry, or in another field altogether. Being able to identify skills that we have developed beyond lab work or experimental design and can apply to any career is helpful, and makes the uncertainty of the future less daunting.


Week in Review: MOOC Edition


Over the past week I spent 1 hour doing yoga, 104 minutes sitting on my exercise ball, and 0 minutes meditating (again, darn!). I spent the week in the afterglow of the symposium, which I think was quite successful. I attended mandatory training about academic integrity and ethics (which featured some pretty shocking and public stories of falsifying data), and another training about the 4 domains of the Researcher Development Framework described by Vitae, which I will talk about in another post. I drank 11 cups of coffee and 7 cups of tea.

Last week I wrote about organising a symposium – I volunteered to organise externals speakers to overcome my fear of presenting myself professionally over email to strangers. I found this article (after the symposium ended, of course) about writing emails to be very useful. I know how important it is to present yourself well over email, but I often get so nervous about it that I end up not sending the email at all, which is just shooting myself in the foot.

MOOC Edition

I have mentioned before that I’m participating in a MOOC called How to Survive Your PhD, run by the marvellous Inger Mewburn of Thesis Whisperer fame. Each of the 10 modules focuses on a different emotion and how it manifests in the context of PhD research. While the readings are interesting, they are very light. The real value for me has been in reading the discussions (unfortunately I joined the course after it finished in real time, so it’s too late to actually participate in the discussions) and watching the videos of the live chats. That is where you find the community, the advice, the tips and tools and tricks that make life as a PhD student easier and less lonely.

It is amazing the number of times people commented “I’m so glad to know I’m not the only one who feels this way” or “I’m so happy what I’m going through is normal”. I have definitely seen a certain level of awareness among my peers of issues like imposter syndrome, depression, and fear of writing. However, it’s always discussed in very vague terms. Nobody says “I am feeling _____, do you ever feel that way too?” And there is certainly no discussion of how to deal with these emotions ourselves! Of course there are a lot of resources available on campus, but the value of a supportive community of peers collectively looking for ways to help each other is underrated.

The internet is such a vast and sometimes overwhelming place, but it’s gems like this that really make me appreciate technology and the ease of communication we have as a result of it. Thanks to the community I’ve found in the MOOC, not only do I feel more confident and supported personally, I now have more concrete ideas and tools to bring to my peers. I have avoided being active in the PostGrad community at my university because I didn’t feel able to contribute anything or socialise well, but that has changed!

The course is still available through EdX, and the communities that have sprung up around it are very welcoming! Come join us!


Photo credit: exitstential / CC BY-NC-ND

Week in Review: Symposium Edition


I did not go to yoga, or practice meditation, at all last week. I spent 41 minutes sitting on my exercise ball, demonstrated for 6 hours, finished 2 books, forgot to keep track of how long I spent writing, drank 12 cups of coffee and 9 cups of tea (during working hours).

I found a great blog post, Literature Review for Beginners on Doctoral Writing SIG, which describes how to get an effective start on a literature review, even when the direction of your project is still uncertain. I found it very helpful, as I’ve been a bit shaky on how to start my lit review.

Susan suggests thinking critically about how literature can be used to support the overall argument of your thesis, beginning with two sections:

  1. The problem – the gap in knowledge that your thesis will fill and why it is important. This is the beginning of the introduction.
  2. The methods – details from previous approaches, including benefits and limitations, to develop/defend your own method.

In addition, one can note good writing – a clear description of a complex theory, or well articulated analysis – and learn from it. I hadn’t considered this before; I think it’s great advice!

After changing reference managers (Mendeley to PaperPile), I spent a long time tagging my papers. The tagging feature is the reason I switched to PaperPile; it makes a hierarchical file structure redundant, which I think is an improvement.

I completed the Fear and Curiosity modules in the Surviving the PhD MOOC, which I continue to find helpful and, to be honest, a great mood-booster. Although I found out about it too late to participate in real time, reading the discussions and watching the live chat videos even after the fact gives me a sense of the community on the course.

Symposium Edition

The 3rd CID Interdisciplinary Symposium, created and organised entirely by students on my program, was held yesterday. That means a large chunk of last week was spent on final preparations, and some unnecessary hand-wringing on my part. We were in fact very well-prepared and the event ran beautifully. While I’m glad it’s over, I also enjoyed the day very much.

I participated in the symposium in three ways: as an organiser, as a session chair, and by giving a flash poster presentation. Presenting and chairing in front of a large audience were good presentation practice, which I absolutely need. I love hearing the eloquent, confident speakers, and am determined to be one of them someday. But where I gained the most, personally, was being on the organising committee.

Fortunately, we were able to learn from the experience of the previous two years, and to build further on their foundation. Based on advice from previous organisers, we began by very clearly defining roles for everybody, including an official chair. Having the right person in charge is so important – they need to be able to see the big picture, delegate, and be organised. If done right, it is a lot of work, but the chair shouldn’t be doing other people’s jobs for them. Our chair organised meetings, took detailed notes at each one, and sent out minutes along with individual to-do lists afterwards. At the beginning of each meeting, we would address every point on every to-do list before we got on to new business. That sounds almost authoritarian, but it really kept everyone on top of things and let us do our own work without worrying whether other people were pulling their weight.

We were lucky that everyone on the committee was enthusiastic and did their job. I know many people have had terrible experiences with “group” projects, but working in a team doesn’t have to be so difficult. Knowing exactly who is responsible for doing what really helps prevent any tensions.

Because of the hard work done in previous years, we were able to focus more on getting external poster submissions, and even an external PhD speaker. Since it was a one-day symposium, it makes sense that a lot of the participants were from our institution, but the hope is to broaden attendance in future years, and increase PhD student participation. This year we focussed on inviting students from other Wellcome Trust funded programs, and had mild success. I hope that next year we can do even better.

It was my job to invite the external speakers and organise logistics with them. I volunteered for it because the thought of writing an email to an academic I’ve never met was absolutely terrifying, so I thought I should do it and get over the fear. We contacted them about 6 months in advance, because timing had been an issue in previous years. We were extremely lucky that all of our first choices accepted the invitation, and none of them dropped out. Communicating with them was also very pleasant – they were approachable and friendly, so I’m glad I faced that fear.

If you’re thinking about starting your own symposium, do it! It’s hard work, but so rewarding – like a boot camp for well-rounded skill development.

Photo Credit: Leo Caves

Harnessing collective intelligence

I was recently introduced to a method of collaborative problem-solving (or idea-generation) called World Café. The basic idea is to explore a specific question using collective knowledge, by bringing people physically together for a day or two, with the premise that the answer being sought already exists in the room.

Participants sit at small tables in groups of ~4. During each of several time periods, groups are given a prompt to discuss. Participants move to different tables, as individuals rather than in groups, and continue the discussion with a new prompt for each time period, referring back to conclusions drawn or ideas generated in previous rounds.

The excitable part of me wants to say that this is intentionally engineered emergence! We can solve all our problems this way! Everyone should be doing this!

The World Café format has great potential, and some clear advantages over the current form of problem solving and question answering seen in academia. But, realistically speaking, it also has some limitations, and could be developed further.

In academia we rely heavily on synthesis of collective knowledge. The days of one person learning all there is to know in the world are long gone, as the amount of knowledge and information being produced increases exponentially with population numbers and technological advancement. The sheer number of individuals doing research and papers being published means that it is literally impossible to read every relevant article in one’s field. Specialisation in science means that individuals have detailed knowledge of a very small slice of a big pie.

So the big questions get split up into smaller questions and those get broken into sub-questions and each one of those can be addressed experimentally and the answer shared (eventually) with the rest of the community via published papers and conferences, where one presents results and discusses work with others.

One result of this system is that ideas spend a lot of time in relative isolation, shared widely only when they are fully-formed and publishable.* Ideas can also become part of personal identity, and as such when they are shared they must be defended. In fact, they have to be defended even at their most nascent stage, in the form of grant applications.

The collective conversation, then, becomes one between individuals who each have their own idea to defend. Any collaborations that are formed, or new ideas that are generated, must build on the existing ones, and can’t be allowed to threaten them too severely, because such threats become a personal attack.

In the World Café setup, there is the opportunity to avoid this kind of dynamic. The first step is to have participants who are willing to come with open minds. Having an interdisciplinary gathering, and combinations of practitioners and academics, for example, can be very beneficial. In addition, the idea sharing and growing happens in real time, and it is clear by the end of the session that each individual has contributed to a greater whole rather than everyone having separate ideas.

However, the initial meeting isn’t enough to really solve a problem or change how collaboration is occurring on a more systemic level. My supervisors participated in a workshop organised this way, and their main critique was that after the buzz and energy of the day, one needs to go off and work on the nitty-gritty aspects in the context of their specialty, while maintaining the input from other participants. In practice, this doesn’t happen so easily, and isn’t included in the process as a whole – rather, it’s left to each individual.

One way to address this issue could be an iterative cycle where participants meet, buzz, disband and distill the output into their work, then meet again to incorporate the distillations. Of course, such a setup would require a long-term commitment and a lot of logistical headache. Nevertheless, I think this format is very promising, and I would love to set up a similar style workshop as part of my research.


*Of course part of this has to do with secrecy, and the desire not to be “scooped”.

Photo credit: ARRRRT / CC BY

Literary spelunking

I said I would spend the first few months “exploring”. This is what the first few weeks look like:

I started off with the book Tending Adam’s Garden by Irun Cohen. I enjoyed the clear development, built using carefully defined language, of the book’s conceptual framework. I will write a review post dedicated to this book soon.

Cohen presents a very different overarching view of the immune system than that shown in textbooks, specifically highlighting differences with Clonal Selection Theory (CST). While I’m familiar with the general concept of CST, I have no knowledge of the specifics, or the research that has been done to develop it.

To remedy that, I am delving into the immunology textbook used at my university, Janeway’s Immunobiology, 8th Edition, by Kenneth Murphy (published fairly recently, in 2012). I have begun with the development and selection of B and T cells, what I consider at the moment to be the core of CST. As I read, I keep an eye out for behaviour that is poorly understood and the subject of active research.

In the textbook, I have encountered an interesting concept called “clonal ignorance” or “immunological ignorance” relating to B cells. I had little success finding papers on Google Scholar or Web of Knowledge that use these terms, and many of the ones I did find were from the 1990s. I find this curious and intend to explore further. I suspect these may be outdated terms that have persisted in newer textbook editions without being updated to match the current literature.

After a supervisory meeting, my planned reading topics have expanded.

The initial question that sparked the idea for my project is how we define “self” in terms of immunology. There was a conference last year called “Redefining the Self” that explored this question in detail, and all of the presentations are available on the website with audio available on request. I’m pleased that I can hear these talks despite having missed the conference!

Systems theory is another area where I have to do some reading. The text suggested as an entry point was Observing Systems by Heinz von Foerster, a collection of essays published in 1982. The book is almost impossible to find, I discovered, but a later collection with many of the same essays, Understanding Understanding, should do the trick.

I think that my lack of expertise in these fields is a bonus at the moment. I’m scooping a bit of information out of each discipline, linking them together as I go, without too much baggage of preconceived notions.

Photo credit: Bart Heird / CC BY-NC-ND