Introducing Constructivism

What is it? 

Epistemology is the branch of philosophy that studies knowledge. Constructivism is a type of epistemology – a philosophical viewpoint about how we know about the world. The idea of philosophy may turn some of you off, but bear with me – this is one of the cornerstones of my research.*

Constructivism is in between realism and solipsism. Realism is the belief that there is a world external to our perceptions, and that our senses impart awareness of that world. Solipsism, essentially the polar opposite, holds that the mind is the only thing that can be known for sure, and knowledge about anything beyond the mind is uncertain.

The basic idea of constructivism is that our knowledge of the world is composed of mental models that explain the way we experience the world. This differs from realism because constructivism doesn’t model the external world itself, but rather our perception of it. Constructivism also differs from solipsism, because it doesn’t deny the existence of a real world external to our perceptions. However, it holds that our only experience of that world is mediated by our senses. Therefore, it is impossible to have knowledge of the external world as it exists beyond our senses.

Why is it interesting for science? 

Let’s think of scientific knowledge specifically as a collection of constructed models. If this is the case, what would that entail?

First, we can ask what the purpose of our model is. The general answer is to make sense of our sensory perceptions – to explain how we experience the world. But this answer, for many, includes an unstated assumption: by making sense of the world, we enhance our ability to change, influence, or control it. This reveals another purpose of model construction: control. In the case of disease, for example, the reason we do research is to cure, treat, or prevent disease. This line of thinking unearths many assumptions to be questioned: what counts as an explanation, or a cause? How is that related to purpose? How is it related to understanding?

Next, if these models represent the way we interact with an external reality (rather than the external reality itself) that introduces the idea that multiple, differing models of the same thing can happily coexist and both be “right”. In that case, the differences come from different observers, and perhaps the different types of measurements they have chosen to take. This is a radically different way of thinking about scientific knowledge than the traditional view of objective representations of reality, separate from any observer.

If we question the objectivity of science, we must explore the consequences of acknowledging that it is subjective or arbitrary. This is easy to see in some situations. For example, whenever a subject is being researched, there are infinitely many questions that may be investigated. However, the scientist chooses one or a few to focus on. How is that choice made? It is subjective, based on the valuations and justifications and existing models of reality of that individual. Even the most rational exploration of possible questions must narrow the field of possibilities in an arbitrary manner at some point.

This line of questioning could continue ad infinitum. But most scientists don’t stop to think about the way they do science, or the origin of their mental models. This is the impetus for second-order science, a new domain of science founded on the ideas of constructivism. The aim is to expand the scope of science, leading to innovation and increased reliability and robustness in research.

One of the new aims of my project is to apply the ideas and methods of second-order science to biology, focussing specifically on immunology. While I am very aware of the role that philosophy has to play in this endeavor, I suspect it will not be a focal point in the work I produce, simply because at this stage it would be a distraction for a biological audience. Nevertheless, I will continue to post about the philosophical roots of my project, because they have a huge impact on the way it is developing.


*I am not an expert in philosophy by any means: these descriptions are necessarily limited.

Photo credit: Michael Kalus via / CC BY-NC-SA


The most persistent internal barrier between me and a mental state of confidence has been uncertainty. Any scientist worth their salt will tell you that uncertainty is a fact of life, and you have to get comfortable with it. I know this. Nevertheless, I am plagued by periods of uncertainty so severe that they cross the line into existential angst.

This is fed by the “special snowflake” syndrome so common in my generation. Unreasonable thoughts float around in my mind, serving as justification for the terror and uncertainty: my project is uniquely difficult and therefore harder than anyone else’s; nobody understands my work therefore I am likely to fail. I know these thoughts to be ridiculous. Putting them in writing makes it even more obvious. But there is a grain of truth in them that makes them insidious, and difficult to abolish.

Yes, my work is uniquely hard. So is everybody’s. Doing a PhD is hard, and one’s work must be unique by definition. No, my work is not widely understood. There isn’t an established benchmark that I will be evaluated against, and I need to be creative and dogged in explaining it until it is understood. This is a problem shared by most people doing interdisciplinary work. Yes, it is harder in a methodological sense, and it’s harder to find an understanding audience. But I don’t have to rely on the success of experiments: I don’t have to cross my fingers that a protein will crystallise so I can carry on with the rest of my research, or wait for months for tissue samples from a hospital, like some of my peers. We all face difficulties.


Photo credit: Creativity+ Timothy K Hamilton via / CC BY-NC-ND

Documenting my research process

What I want most out of this blog is to make a splash. Normally this means to get a lot of public attention, but what I want is to get my own attention. Collins thesaurus suggests the synonyms: cause a stir, make an impact, cause a sensation, be ostentatious. What I want is to bring those things into my life and my research by blogging. 

Splash & Good morning

Consider this photograph. That coffee is demanding attention. It’s exciting and beautiful. It’s leaping free of gravity, creating the ephemeral shape of a splash. Of course, we all know it’s going to make a big mess when it falls down. The coffee isn’t frozen in time, only the photograph is. But does that diminish its loveliness?

There’s that saying about the importance of the journey rather than the destination, and I am feeling that more and more. I don’t want to spend my PhD looking only ahead to the finished product. It’s the process that really matters. The importance of producing a good dissertation isn’t in the document itself. Fewer than a dozen people will read it – and that’s a generous estimate. I could switch to an entirely different field in a postdoc, and never think twice about the work I’m doing now. It’s like the spilt coffee – the aftermath of a beautiful splash.

So that’s what I mean when I say “document my research process.”

And that means being braver. Inherent in this process is the fact that my ideas are not fully formed, I don’t have enough information, I don’t understand well enough yet. I will be wrong, often, possibly always. But if I’m too timid to share an idea because it might be wrong, I’ll never get anywhere.



Photo credits

typewriter: mmadden via / CC BY-SA

coffee: dongga BS via / CC BY-NC-ND

Why I enjoyed Medawar’s “Induction and Intuition in Scientific Thought”

Peter Medawar was by all accounts a brilliant and witty man. He shared the Nobel Prize in 1960 for the discovery of acquired immunological tolerance – work on graft rejection that had a huge impact on the field of immunology. His wit and personality come through in this little volume, originally given as a series of lectures in 1968. I found this book to be rich in ideas, and still relevant today. I begin with a summary of the book (with comments), then describe its place in my research. In many cases I will quote him directly, mainly because I enjoy his prose.


Medawar states in the preface that the lectures “began in my mind in the form of a question: why are most scientists completely indifferent to – even contemptuous of – scientific methodology?” (p. vii)*. It wasn’t until this past year that I learned the definition of methodology. It is not, as I had previously thought, a fancy word for methods, or a set of methods. It is the analysis of methods. Of course individual methods are analysed carefully for effectiveness (e.g. PCR or flow cytometry), but that’s not what Medawar is talking about. He’s referring to “The Scientific Method” and the way we think when we do science.

Chapter 1: The Problem Stated

In the first chapter, Medawar explains the question. What exactly do scientists (specifically biologists) actually do to make scientific discoveries? He argues that most scientists are themselves unable to answer this question. The few who have tried either produce misrepresentations or are not scientists at all but lawyers, historians or sociologists (the notable exception being William Whewell, himself a scientist as well as a philosopher and historian of science, whom Medawar refers to repeatedly). Nevertheless, scientific discovery continues! So why bother with scientific methodology at all? He suggests it would address questions of (1) validation, (2) reducibility and emergence, and (3) causality, which are of interest to all sciences (even the social ones).

Chapter 2: Mainly About Induction

In this chapter Medawar argues that induction, long referred to as the core of the scientific method, simply isn’t. This is not a new or unique argument, and he explains why: “Induction, then, is a scheme or formulary of reasoning which somehow empowers us to pass from statements expressing particular ‘facts’ to general statements which comprehend them. These general statements (or laws or principles) must do more than merely summarize the information contained in the simple and particular statements out of which they were compounded: they must add something … Inductive reasoning is ampliative in nature. … This is all very well, but the point to be made clear is that induction, so conceived, cannot be a logically rigorous process. … No process of reasoning whatsoever can, with logical certainty, enlarge the empirical content of the statements out of which it issues.” (pp. 23-4)

One of Medawar’s problems with induction is that it doesn’t account for the critical use of experiments. He describes four different types of experiments:

  1. Inductive or Baconian, what I would call exploratory experiments, of the type ‘I wonder what would happen if…’. These are not critical experiments. They are meant to “… nourish the senses, to enrich the repertoire of factual information out of which inductions are to be compounded.” (p. 35)
  2. “Deductive or Kantian experiments, in which we examine the consequences of varying the axioms or presuppositions of a scheme of deductive reasoning (‘let’s see what would happen if we take a different view’).” (p. 35)
  3. “Critical or Galilean experiments: actions carried out to test a hypothesis or preconceived opinion by examining the logical consequences of holding it.” (p. 37)
  4. “Demonstrative or Aristotelian experiments, intended to illustrate a preconceived truth and convince people of its validity.” (p. 37)

His argument is that multiple stages of experimentation are necessary in the course of original research, and critical experiments (type 3) are necessary to progress beyond “academic play” (p. 38). The version of the scientific method I was taught in school described only critical experiments – I consider that to be just as serious a misrepresentation of science methodology as only including Inductive or Baconian experiments.

Looking at this list from a modern perspective, I wonder whether we are able to distinguish so clearly between critical and demonstrative experiments. What exactly separates the two? Is it intent? If so, how are we to judge another’s experiments simply by reading a paper in a journal? The conclusion I have drawn after reflecting on this list is that published papers may be a good way of disseminating results, but they are very poor at representing the methodology of science. As Medawar says later in the book, “The critical process in scientific reasoning is not … wholly logical in character, though it can be made to appear so when we look back upon a completed episode of thought.” (p. 53). In other words, papers present a logical progression that is seen only in hindsight, and doesn’t reflect the reality of research. Baconian experiments are necessary to embark on any new area of research, but they are rarely published unless something extraordinary is stumbled on.

Medawar goes on to explain the specific shortcomings of induction as a methodology, at the same time highlighting the requirements of a good methodology. This is his summary at the end of the chapter:

  1. “Inductivism confuses, and a sound methodology must distinguish the process of discovery and of justification.
  2. The evidence of the senses does not enjoy a necessary or primitive authenticity. The idea, central to inductive theory, that scientific knowledge grows out of simple unbiased statements reporting the evidence of the senses is one that cannot be sustained.
  3. A sound methodology must provide an adequate theory of special incentive – a reason for making one observation rather than another, a restrictive clause that limits observations to something smaller than the universe of observables.
  4. Too much can be made of matters of validation. Scientific research is not a clamor of affirmation and denial. Theories and hypotheses are modified more often than they are discredited. A realistic methodology must be one that allows for repair as readily as refutation.
  5. A good methodology must, unlike inductivism, provide an adequate theory of origin and prevalence of error…
  6. … and it must also make room for luck.
  7. Due weight must be given to experimentation as a critical procedure rather than as a device for generating information; to experimentation as a method of discriminating between possibilities.” (pp. 40-41)

Chapter 3: Mainly About Intuition

In the final chapter, Medawar makes a case for a “hypothetico-deductive” scheme of science methodology, originating from many thinkers including Kant, Robert Hooke, Stephen Hales and Roger Boscovich, and advocated in Medawar’s time by Karl Popper. He details how each of the seven requirements for a better methodology is met by this model, but I won’t get into that here. His focus, and I think the more interesting aspect of the chapter, is on the role of intuition or creativity in this model. “Scientific reasoning is an exploratory dialogue that can always be resolved into two voices or two episodes of thought, imaginative and critical, which alternate and interact.” (p. 46).

Again, he describes four types of creativity (not ruling out the existence of more): deductive, inductive, wit, and experimental flair. The details are less important than the conclusions he draws from their existence: “… an imaginative or inspirational process enters into all scientific reasoning at every level.” (p. 55). “That ‘creativity’ is beyond analysis is a romantic illusion we must now outgrow. It cannot be learned perhaps, but it can certainly be encouraged and abetted. We can put ourselves in the way of having ideas, by reading and discussion and by acquiring the habit of reflection, guided by the familiar principle that we are not likely to find answers to questions not yet formulated in the mind.” (p. 57).


Some thoughts

I loved reading this book. Every section seemed to cut right to the heart of an issue and reveal it starkly, thanks in large part to Medawar’s lovely style of writing. The questions he identifies as important across scientific disciplines (validation, reducibility and emergence, and causality) remain relevant today, and I have a lot of personal interest in them. Beyond that, the overall premise of the book is closely related to my research. A large part of my work at the moment is focused on second-order science. As defined on the website: “First-Order Science is the science of exploring the world. Second-Order Science is the science of reflecting on these explorations.” It seems to me that Medawar is doing exactly that when he talks about science methodology.

Throughout the text Medawar advocates self-reflection, or reflexivity, in science. Unfortunately, most scientists remain as unconcerned with such things today as they were 50 years ago – that’s the domain of philosophers of science – despite the potential implications. What would be the benefit of engaging with science methodology in the way Medawar did? He says: “Most scientists receive no tuition in scientific method, but those who have been instructed perform no better as scientists than those who have not.” (p. 8). Could we change that state of affairs? Could we teach ourselves to be better scientists if only we could describe what we are doing?

In the past year I have done a lot of reading about systems theory, with a bit of complexity science and cybernetics thrown in there. A key part of General Systems Theory, as originally defined by Ludwig von Bertalanffy, is the idea that there are patterns and general rules that can be found and applied in systems across all disciplines. In that sense, it is a transdisciplinary theory. Cybernetics, as defined by Norbert Wiener, is the study of communication and control, now referred to as the study of regulatory systems. It is also transdisciplinary, and closely related to systems thinking in intellectual lineage. Given that background context, I was very excited to read the following passage (pp. 54-55):

“There is nothing distinctively scientific about the hypothetico-deductive process. It is not even distinctively intellectual. It is merely a scientific context for a much more general stratagem that underlies almost all regulative processes or processes of continuous control, namely feedback, the control of performance by the consequences of the act performed. … scientific behavior can be classified as appropriately under cybernetics as under logic.”

This observation, combined with his repeated suggestion of reflexivity, shows that Medawar was thinking in terms of second-order science. This is a lovely example of synchronicity, or the same idea occurring separately in many places at once, because a group of cyberneticians described second-order cybernetics shortly after.
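Medawar’s cybernetic framing – “the control of performance by the consequences of the act performed” – can be made concrete with a toy sketch. The following code is my own illustration, not anything from the book: a minimal proportional feedback loop in which each action is corrected by the observed error it produced, just as a hypothesis is revised in light of the experiments it provokes.

```python
# A toy feedback loop (my own illustration, not from Medawar): performance
# is repeatedly compared against a target, and each new act is a correction
# based on the consequences of the previous one.

def feedback_control(target, output=0.0, gain=0.5, steps=20):
    """Iteratively adjust `output` toward `target` using the observed error."""
    for _ in range(steps):
        error = target - output   # consequence of the last act
        output += gain * error    # next act, corrected by that consequence
    return output

print(round(feedback_control(100.0), 3))  # converges toward 100.0
```

No single step is “right”; the loop as a whole homes in on the target, which is the sense in which scientific behaviour, for Medawar, belongs as much to cybernetics as to logic.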

The more I think about what Medawar wrote, the more I link it to the rest of my research. I will certainly be referring to Induction and Intuition in Scientific Thought in future posts, and I highly recommend it if you have any interest in the way scientists think.


*All page number citations refer to:

Medawar, P. B. 1969. Induction and Intuition in Scientific Thought. Vol. 75. Memoirs of the American Philosophical Society. Trowbridge & London: Redwood Press Limited.


Photo credit: DiariVeu – via / CC BY-NC-SA

Niche: identified

The most recent Thesis Whisperer post featured a list of unsolicited advice received by a newly minted PhD student before starting her degree. At this point, a year and a month into my PhD, the piece of advice I found most salient is “find a niche and follow your passion.” That about sums it up.

Reflecting on this past year, I can certainly say I have followed my passion. I chose to develop my own project from scratch (with the enthusiastic support of my supervisors) guided by nothing but passion. Of course there were moments (or stages, even eras) when I lost confidence, but I stuck with it and passion carried me through. But it wasn’t enough to convince anyone – from the very inception of my project I’ve been met with confusion (and hesitation) by almost everyone except my supervisors. Of course, that makes perfect sense when you factor in the total lack of specificity: my passion is very general. To the point of being kind of useless on its own. And that’s what the niche is for!

I have been doing “literary spelunking” for over a year now. I read broadly, and deeply, and learned a lot of interesting things. But at some point even I began to despair that I would never be able to narrow down – cue some existential angst. It took patience and some hard thinking, but eventually I realised that there were certain ideas that just kept popping up – and not only that, but they were actually related. When I started thinking about those relationships, I started to define my niche. Today, in a very animated meeting with my supervisors, we drew it out in a Venn diagram. And there it was, on the whiteboard. Finally.

The point of a niche is not to be restricted to it – I haven’t lost general passion because I’ve now identified a niche. But I’m grounded in it. My passion is guided and made useful by the boundaries it provides. It’s my intellectual home base. And, for bonus points, I actually get to build it! Maybe one day it will expand into a full-fledged city.


Photo credit: madelyn * persisting stars via / CC BY

Multi- Inter- Trans- Post-disciplinary

“Interdisciplinary” is a trendy term at the moment. There have been increased calls for interdisciplinary research to solve complex global problems, and the journal Nature devoted a special issue to the subject last year (note the superheroes above). It’s common sense that difficult problems have many facets, and a diverse grouping of specialists, each with their own perspectives and tools, would be better equipped to tackle such problems than any single discipline alone. While I agree wholeheartedly with this conclusion, there is more to be said.

To the dictionary!

First, let’s unpick the differences between multi-, inter-, trans-, and post- disciplinarity, because these words get bandied about a lot and I have found them confusing. I have been sitting in on a skunkworks group of both science- and humanities-oriented academics who meet periodically to explore new ideas and discuss interdisciplinary collaboration, and complexity. The official definitions of these terms are unclear and inconsistently used, so I will report what the skunkworks have concluded.

In any collaborative group, each person has the opportunity to contribute to and receive from the rest of the group. The degree of conceptual feedback between participants, and the outcome of the collaboration, determine the label we assign.

Multidisciplinary interaction can be understood as having no feedback. Each individual contributes their specialist knowledge, tool or method to a problem, but takes nothing back from the group interaction. For example, a large-scale epidemiological study might enlist the help of a statistician and a bioinformatician without those individuals absorbing the perspectives of epidemiology.

Interdisciplinary collaboration happens when there are feedback loops, when individuals take back insights from other disciplines into their individual disciplinary research, or work together on a project that overlaps multiple disciplines. When that work crosses the fuzzy boundary between overlapping and meshing, you end up with new disciplines like mathematical biology, with their own established sets of tools, techniques and concepts.

I think transdisciplinary is the hardest term to understand, unless you are familiar with the concept of emergence as a property of complex systems. When the feedback and collaboration between individuals and the group leads to the creation of something totally new, that is transdisciplinary – an emergent property of interdisciplinary collaboration. I don’t have an example of this, but if you do please chime in. I suspect that working with complex systems in a way that truly engages with and embraces complexity rather than trying to simplify it could lead to transdisciplinarity. Note that there should be no value judgement assigned to this outcome. Academics are practically hardwired to think that novelty is inherently good because it is a requisite for publication and prestige. However, to say that something is different does not mean it is good or bad, better or worse.

Postdisciplinary has more to do with the institutional structures of academia, specifically the organisational division of disciplines. Some argue that the problems we seek to address are not disciplinary in nature. Treating cancer, for example, is not just a medical problem. Availability of health care, economic situation, lifestyle choices linked to behaviour and psychology, and development of novel treatments and diagnostic tools all play a role in how an individual experiences and is treated for cancer. Those in favour of a postdisciplinary arrangement argue that organising institutions around themes or specific problems, abandoning disciplinary boundaries, would be a more effective way of tackling difficult issues.

Now that I’ve put forward these definitions, I want to challenge their usefulness. Does labelling these different types of interaction serve a purpose? I suggest that the value comes from thinking about the different ways people can interact and the types of outcomes that arise from those interactions – the names are ultimately unimportant. I will continue to use the word “interdisciplinary” as a catch-all that includes the research of individuals (like myself) who are working across disciplinary boundaries.

Some challenges are barriers, some are opportunities

A few weeks ago I attended the symposium Exploring the Frontiers of Interdisciplinarity, where speakers discussed the challenges and benefits of interdisciplinary research. Several difficulties arise as a consequence of the institutional structures of academia. For PhD students, the need to identify with a single department inherently inhibits interdisciplinary research. Universities are modular, split by discipline into self-contained and separately administered units that struggle to coordinate with each other. For a PhD student, there is also the question of legitimacy. If one is doing something new, there is no benchmark against which to measure output. Finding examiners with the necessary background knowledge to evaluate your work is difficult. This problem extends beyond the PhD to every stage of the research career when one considers the difficulty of publishing interdisciplinary research. Finding a relevant journal, willing reviewers, and an interested audience is challenging because the current system is so tightly structured around distinct fields of study.

The most memorable theme from the symposium, which resonated strongly with my own experience, was how frequently the benefits of interdisciplinarity emerge from tackling the problems head-on. Many challenges have to do with interpersonal relations during collaboration. One of the main problems is communication, especially the use of specialised jargon. Certain words are simply unknown to anyone outside a specific discipline – for example, specific cells or molecules that are familiar to an immunologist may not be known to a molecular biologist, much less a historian or sociologist. In this case, those unfamiliar with the words are aware that they don’t understand the meaning. The more confusing situations arise, however, when words that are used in everyday language take on additional specific meaning in a disciplinary context. For example, the word “significant” means something very specific in a scientific context, reflecting the outcome of a statistical analysis of data; in everyday language, however, it is simply a synonym for “noteworthy” or “having meaning”. If one is unaware of the specific usage of a familiar word, it may not even be recognised as jargon.

One way to approach communication in an interdisciplinary setting is to intentionally avoid jargon, or to use it sparingly and explain it clearly, thereby building a shared vocabulary over time. Another strategy, and one that I have observed to be very powerful and was discussed with enthusiasm at the symposium, is the creation of new and shared metaphors. The use of metaphor is incredibly powerful. Choosing a focal point that is removed from each individual’s specialty but familiar enough to be played with encourages the group to create a shared understanding and common language. The resulting metaphors can also influence the way researchers view a problem or theory, and can be taken back to their individual work.

The process of discussion and exposure to the perspectives and modes of thought of other disciplines has another excellent effect on the individual. It highlights one’s own thought patterns, assumptions, and perspectives. It can be incredibly useful to analyse and question the “rules” of your discipline, as it can lead us to ask new questions and form new ideas.

Similarly, the combined perspectives of multiple disciplines can be very effective in thinking about specific problems in a new light, or creating new projects. That is becoming more important as we face “wicked” problems – those which are difficult or impossible to resolve due to interconnectedness, lack of knowledge or contradictory information, and the number of actors involved. Recognising that these problems arise out of complex systems means we need complex approaches to solving them – and interdisciplinary research works towards that, because different perspectives provide different affordances. Affordances are “the properties of the world that we perceive that enable us to control our actions” (source). For example, when we perceive a handle on a mug it suggests to us that we can pick it up. A doorknob on a door can be turned, and the door can either be pushed or pulled. A panel on a door shows that the door can be pushed. Because specialists are trained to view objects and problems in a very specific way, they each perceive different affordances, and therefore different available actions. The very nature of a wicked problem suggests that we may never fully understand it, but the more perspectives we apply to it and the more affordances we reveal, the better we’re able to take informed action.

If you look at the superheroes on the cover of Nature, you’ll notice that they are all representing the sciences. The symposium I attended was focused on the humanities – my supervisor and I were the only scientists there. Projects that bridge the gap between the humanities and the sciences are rare, and I think this is an unfortunate testament to how entrenched our disciplinary perspectives are. There is enormous untapped potential in these collaborations. I hope that in the future, as interdisciplinary research becomes more common and widely appreciated, we can forge new connections between the sciences and the humanities. And, as a final thought, I think the nature of the wicked problem also dictates a need to reach outside of academia, not just for outreach or public engagement, but for active collaboration with individuals, groups, institutions and populations.

If you are considering interdisciplinary research or collaboration I hope that these thoughts are of use, and hopefully encouraging. Despite the struggles there is great value that comes out of it. For those of you already on the interdisciplinary bandwagon, how have you approached it, and what have you gained?

Chickens, eggs, and mutual causality

Which came first, the chicken or the egg? The way this question is phrased implies a linear causality: one exists first, and then gives rise to the other (A → B). We have a tendency to see things this way. We look for root causes, explanations that can be traced back to one person, event, bacterium or machine. If we can find that cause, we can solve problems and make sense of the world. This strategy has worked well for us – we use antibiotics to kill bacteria, we stopped using the chemicals that were thinning the ozone layer, we learn in school that the assassination of Archduke Franz Ferdinand triggered WWI and that the invention of the steam engine led to the industrial revolution.

But we struggle with the chicken and egg problem. Which came first? Neither. Our difficulty in answering the question gives the game away – there is no linear relationship here. It’s more like A ↔ B. The chicken and egg reproductive process is the result of a long evolutionary history, each iteration influenced by the environment and by the iteration that came before. If we take small steps back in time, the difference between each iteration is so small as to be indistinguishable. But take a leap back, and you perceive a difference. At what point in that history of development can we draw a line, when either side of that line will always look basically the same?

We don’t like this kind of problem. It’s harder to think about and it’s harder to solve. If you need experience to get a job, and you need a job to get experience, what should you do? If you need enzymes to make proteins, and enzymes are proteins, how did the first enzyme get made? What about the origin of life?

We have struggled with these questions for centuries. Creation myths and religious explanations are part of every culture, but even they leave us wanting more. My grandfather was a pastor, and he believed that the world was created by God. I asked him once where God came from, and he told me a story.

A pastor was walking on the beach one day, asking himself who or what had created God. He encountered a boy, running back and forth between the ocean and a hole in the sand. The boy was carrying a pail, and filling the hole with water from the sea. “What are you doing?” the pastor asked. “I’m putting the ocean into this hole,” the boy replied. “But the ocean is much too big to fit in that little hole!” he said. And the pastor realised that questioning the origin of God was much the same – too big a question to fit in a human mind.

This is the problem with linear causality. If we search for the one start, cause, origin – there is always something that came first, some earlier start, cause, origin.

Another way of looking at relationships is to see mutual or circular causality. In systems theory this relationship is described using feedback loops. In Buddhism, it’s called Pratītyasamutpāda, or dependent co-arising. Joanna Macy describes the concept well in her book Mutual Causality. My intent here is not to describe mutual causality but to discuss the potential impact of seeing relationships in this way. Using the chicken and egg example, we can say that the chicken and egg came into existence together, co-dependently, each creating the other iteratively over time. We see the relationship between them in a new light, and can then start to ask different questions. To me, this is the core of complex systems theory.
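To make the A ↔ B idea concrete, here is a minimal sketch (my own illustration, not from Macy’s book): two quantities that each drive the other’s growth, updated simultaneously so that neither is “first”. All names and rates are hypothetical.

```python
def simulate(steps=50, a=1.0, b=1.0, k_ab=0.05, k_ba=0.05):
    """Iterate a simple mutual-influence system: each step, A grows in
    proportion to B, and B grows in proportion to A (a feedback loop)."""
    history = [(a, b)]
    for _ in range(steps):
        # Simultaneous update: each new value depends on the other's old value,
        # so there is no meaningful "cause came first" ordering.
        a, b = a + k_ab * b, b + k_ba * a
        history.append((a, b))
    return history

trajectory = simulate()
```

Asking “did A cause B, or B cause A?” about this little system is as unanswerable as the chicken-and-egg question – the trajectory is a product of the loop, not of either variable alone.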

In immunology, the field my research project focuses on, one of the aims of research is to contribute to disease prevention and treatment. Many of the most widespread treatments currently rely on linear causality to identify treatment targets: antibiotics, antivirals, immunosuppressants. But what we experience as disease isn’t caused by a pathogen in a simple linear way – it arises from the interaction of our bodies with the pathogen. The same bacterium could make you sick or not depending on where it enters your body, in what quantity, at what time in your life, and alongside what other microbiota. Whether you experience symptoms of sickness depends on how your body, in the condition of that moment, interacts with that pathogen, as it appears in that moment.

It may seem like adopting this perspective would make it impossible to find solutions, but it actually presents more opportunities for interventions and treatments. Vaccines are the most prevalent example of a medical intervention that takes a systemic approach. A vaccine relies on the inherent structure of the immune system and an immune response to perturb the system to a different, desired state. When we get an infection and then experience sickness, the pathogen is perturbing the body (a complex system) in such a way that it experiences a state change, from healthy to ill. A vaccine perturbs the system to a state of immunity from that particular pathogen. There are many potential ways to perturb a system, and those can be exploited for novel treatment pathways. One example is the addition of bacteria to treat unwanted bacterial infections or chronic inflammation (you may have heard of fecal transplants being used to treat IBS).
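The idea of a perturbation pushing a system from one stable state to another can be sketched with a toy model (my own illustration, with hypothetical labels – real immune dynamics are vastly more complicated). The system below has two stable resting points; a small nudge dies out, while a large enough one tips the system into the other state.

```python
def relax(x, steps=1000, dt=0.01):
    """Let a bistable system settle under dx/dt = x - x**3.
    It has two stable states, x = -1 (call it "healthy") and
    x = +1 (call it "ill"), separated by a threshold at x = 0."""
    for _ in range(steps):
        x += dt * (x - x**3)
    return x

healthy = relax(-1.0)        # undisturbed: stays near -1
nudged  = relax(-1.0 + 0.5)  # small perturbation: relaxes back toward -1
flipped = relax(-1.0 + 1.5)  # large perturbation: crosses 0, settles near +1
```

On this picture, a pathogen is a perturbation strong enough to flip the state, and a treatment or vaccine is a deliberate perturbation (or a reshaping of the landscape) that keeps the system in, or returns it to, the desired basin.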

Treatments are certainly getting more creative, and I don’t mean to imply that nobody is finding novel solutions or using systemic methods. However, I think much of the research that goes on (e.g. finding drug targets) is still based on a conceptual framework of linear causality. The complexity and interconnectedness of biological systems are becoming more apparent as we gather more information, but widespread adoption of a systems perspective has not yet occurred. Innovations like bacteriotherapy arise in part from urgent concerns like increasing antibiotic resistance, and the pressing need to find alternative solutions. I argue that such innovation would be more easily attained if our conceptual frameworks included mutual causality. A post from the blog Emergent Cognition explains it well: “Although one perspective isn’t inherently better or worse than another, each reflects a way of seeing with unique affordances and constraints on how we think.” We can only gain from broadening our perspectives to include circular causality and complex systems thinking.


Photo credit: Infomastern / CC BY-SA