Science: The Future

Monday, May 30, 2005

The Changing Paradigms of Science

Dr Michael Fuller is a theoretical biologist and Visiting Fellow at the Institut du Cenospecies in Paris, France. He has an extensive publication record; his most recently published peer-reviewed papers include Investigating Variations in Pattern Heritability (in Reproduction and Inheritability 12.6.04, pp 1456–1463) and Can Pattern Mathematics Provide Insights into the Origin of Life? (Journal of Investigative Life Sciences 8.4.03). In this extract from his forthcoming book for a lay audience, Dr Fuller introduces and expands on some of the ideas contained in the latter paper.

It seems to have become fashionable of late to criticize science and to act as though science (and, by extension, scientists) were hostile, inhuman—to behave as though many of the wonderful luxuries we enjoy in the modern world were not thanks to the rigorous discipline of science. The legacy that the nineteenth and twentieth centuries have left us and future generations is a magnificent one, and should not be denied; what we should be doing now is not criticizing that heritage but asking how science in the twenty-first century can offer insights and discoveries as profound.

Before we go into this further, though, it may be worth making clear exactly what science is, and at least as importantly what it isn’t. From media representations of science you may have got the impression that science is some kind of monolith, a mass of closed ranks (if that isn’t mixing metaphors). That’s one thing science isn’t! Science is, if anything, a process and a debate; the process is the investigative methodology by which we progress. First, we look at the facts, which is to say at the world around us; we then look for a hypothesis (what in common language we would call a ‘thesis’ or an ‘idea’) which might explain those facts. In science we look for hypotheses which can in some way be tested, although importantly the kinds of testing we can apply vary with the hypothesis itself—of which more later. Anyway, this requirement of testability leads to the next step, which is the testing itself. At this point, if the data produced or found fails to fit the hypothesis then we must conclude that the hypothesis is wrong and either needs adjustment or needs to be thrown out completely. Only once a hypothesis has been tested again and again does it become what scientists call a theory. And really, on the surface that’s all there is to it: science is a simple system for trying to test all ideas and actually show them to be true or false. The complications come where this ideal collides with the real world.

One of the interesting things about science is the degree to which it relies on approximations. From the very earliest times people have tried to work with the world in terms of what we scientists call models, which basically means descriptions or simplifications of real events which are too complex to study (and, often, too complex to really understand—as with, say, economics, where there is just so much going on that you can’t possibly think everything through). Scientific theories, too, are developed, described, and tested using models. There’s nothing wrong with this, exactly, and in fact we all work with models all the time; the impressions we have of other people are basically models, since we can’t know anyone else perfectly. The trouble is that while we scientists should remember that even the best theory is a simplification—a model, no matter how complicated and ‘realistic’ a model it may be—it is easy to forget, just as it is easy to believe that we ‘really know’ those who are close to us. Of course, a lot of the time our models are very, very good; the scientific descriptions of atoms, DNA, and so on are superb and work very well, just as my ‘model’ of my wife is very good. However, they are just models—just theories; my wife can still say and do things that surprise me, and though until now such surprises have been small there’s always a possibility that she will do something that really, really comes as a surprise; something, in short, which might remind me that despite 23 years of marriage I am still really working with a model, albeit one I have subjected to years of ‘testing’.

This point with regard to models is critical to all of science and applies to everything. That’s where scientific controversies arise; a model is developed to fit available facts, then as time passes it is adjusted again and again to fit the data (usually because measurement techniques reveal inadequacies in the original, simple model). To come back to the metaphor (actually, another ‘model’) of marriage, we would say a couple grow to know each other better and better over the years. Actually, what they are doing is adapting and tweaking their models of each other as they learn increasingly detailed data about one another. That model is never really an exact representation of the other person, but it’s usually so good that we never need to worry about the inaccuracies, and our tweaking of the model really does bring it closer to the other person (which is why doing that is such a good rule of thumb—it’s actually an application of Ockham’s razor). Sometimes, however, our models lack some critical piece of data which can make a huge difference—and when this is revealed, just like scientists confronting some new fact, we need to choose whether or not to abandon the model. As an example, you occasionally read of wives whose husbands have suddenly been found to be murderers, or rapists, or something equally bad, and yet who elect to remain with their husbands, visiting them in jail and so on despite knowing this about them. What has actually happened is that they have discovered a fact which, properly, should render their model invalid, but they are so sure of its rightness that they simply attempt to tweak it once more, even though there is no simple tweak that can account for murder.

Right now, ‘science’—or rather, we scientists—are at something of a similar crisis point, and it all starts with patterns. We are all familiar with the feeling that there is a pattern to our lives, and with the urge to look for patterns in random occurrences; at its extreme this can manifest itself in seeing images of the Virgin Mary in salt stains or in mineral buildup on a windowpane, but it is a recurring theme throughout life. We are incredibly well-suited to spotting patterns, and no-one really knows why. There are hypotheses, of course; one seductive and highly plausible Darwinist explanation is that an ability to recognize patterns can increase one’s ability to spot predators, make predictions and so on, and that our ability to, say, see faces in clouds is simply the mind using the mechanisms evolved for this more practical purpose. This hypothesis may well hold some value as far as visual pattern-recognition acuity goes (and though I will not go into the data here I will point out that the evidence for biological evolution is overwhelming), but it does not explain why pattern forming is quite so integral to our evolved intellectual capabilities—pattern forming which has enabled us to develop philosophy, religion, and science, among other things. Why we have this ability to recognize patterns may remain a mystery, but have it we do: and in recent years it has enabled us to discover something truly fascinating.

In this article, based on an extract from my forthcoming book, I will only be able to cover this in very slight detail; in the book I cover a great deal more, not least because it includes the findings of my own research team’s work into unconventional myosins and into pattern mathematics (previously published in more technical forms in the peer-reviewed Close Analyses of Gene Data and the Journal of Investigative Life Sciences respectively). Unfortunately, most of this work is highly technical and would be almost impossible to cover in the space allotted here; if you are interested in learning more I suggest you wait for the publication of my book in August. However, we can cover some of the most important discoveries without going into the technical data of gene analysis; in fact, much of the most important data has been known to the scientific community for years, even decades—it is simply that many scientists (though not science, which, as we have seen, is a methodology, not an ideology) are as yet unwilling to face the implications of what they are seeing.

The patterns which concern us here—though there are others, both known and waiting to be discovered—interest us because they are ‘inexplicably’ scale invariant; they have to do with something called power law behaviour. Although when dealing with the depth of data covered in my own papers this can get quite difficult, the principles are incredibly simple. When one quantity (say y) depends on another (say x) raised to some power, we say that y is described by a power law. For example, the distance travelled by an object in free fall (in the absence of air resistance) is given by s(t) = ½gt². Ignoring the dependence on g, this equation says that s is proportional to the square of t. Plotting on linear (Cartesian) axes produces a parabola. Now, a parabola is a very beautiful curve, but it is difficult to eyeball a curve and say with confidence that it is a parabola. It is even harder to eyeball some data points, with their uncertainties, and tell whether they follow a parabolic curve. Eyeballs are much happier with straight lines, so it’s good that there is a handy way to convert this power-law relationship into a straight line. Take the logarithm of both sides of the equation—log(s) = 2 log(t) + log(½g)—then plot log(s) on the y axis and log(t) on the x axis: we get a straight line with slope 2 (which is the exponent of t). OK—that’s actually the difficult part here.
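To make the log-log trick concrete, here is a minimal sketch in Python (my own illustration, not taken from the research discussed in this article; it assumes the NumPy library is available). It generates idealized free-fall data from s = ½gt², takes logarithms, and fits a straight line whose slope recovers the exponent:

    import numpy as np

    g = 9.81                          # acceleration due to gravity, m/s^2
    t = np.linspace(0.1, 10.0, 50)    # times in seconds (t = 0 is avoided because of the logarithm)
    s = 0.5 * g * t**2                # distance fallen, ignoring air resistance

    # Fit a straight line to log(s) against log(t); the slope is the exponent of t.
    slope, intercept = np.polyfit(np.log10(t), np.log10(s), 1)

    print(round(slope, 3))            # 2.0, the power of t
    print(round(intercept, 3))        # 0.691, i.e. log10(g/2)

Real measurements would of course scatter around the line, but the principle is the same: plotted on log-log axes, a power law becomes a straight line whose slope equals the exponent.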

The thing is, there’s nothing surprising about finding that an object in freefall obeys a power law—in fact, it’s obvious (by which I mean ‘obvious given the law of gravity’) that it should. But there are some less obvious places where power laws are found, and together these add some very interesting ideas to our understanding of what science can teach us. Perhaps the place to start looking at this is with patterns arising in living systems (in fact, this is where my colleagues and I have been looking). Again, we don’t have space to go into this in a great deal of detail (though if you are interested you can buy my book when it comes out); however, we can run through a few examples.

In terms of life, a fundamental quantity that determines the way we sustain ourselves is our metabolic rate—the amount of energy we require per second to stay alive. This is called the basal metabolic rate. Many years ago, Max Kleiber plotted the basal metabolic rate against mass for various birds and mammals on a log-log plot. A seeming simplicity emerges in the form of a straight line, indicating a power law—a pattern, and one which echoes the one we looked at earlier for an object in freefall. The slope of the line on the log-log plot is ¾, which means that metabolic rate varies as mass raised to the ¾ power. In other words, a three orders of magnitude change in metabolic rate corresponds to a four orders of magnitude change in mass. Notice that a man runs at a little less than 100 watts (about a light bulb)—that’s roughly 2,000 calories a day. All this is old news, having been discovered in the 1930s, but more recently researchers have taken it further and looked at scales below the original mammalian and bird data, plotting cold-blooded organisms and unicellular organisms. It was discovered that the same ¾ power found for mammals also governs these other taxa: all metabolic rates scale at a power of very close to ¾. It doesn’t stop there, though: there are perhaps 150–200 such scaling laws relating physiological variables, and it is truly extraordinary that the exponent in the power law is invariably a simple multiple of ¼. The number four plays a fundamental role in the way life is structured: four is the magic number of the universe. Let’s look at some other scaling phenomena to see this. Take the radius of the aorta plotted against the mass of mammals: the slope is very close to 3/8, and if you square the radius to get a cross-sectional area you end up with a slope of ¾. Tantalizingly, a tree trunk scales the same way—radius against mass. How does heart rate change with size? It decreases as mass to the ¼. And lifespan? It increases as mass to the ¼, so that the two effects cancel and the number of heartbeats in a lifetime is roughly the same across mammals.
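To see what the ¾ exponent implies in practice, here is another small Python sketch (again my own illustration; the constant B0 below is hypothetical, chosen only so that a 70 kg human comes out near the light-bulb figure mentioned above, and the exponent is the point of interest):

    # Kleiber's law: basal metabolic rate B scales as mass M to the 3/4 power, B = B0 * M**0.75.
    B0 = 90 / 70**0.75    # watts per kg^(3/4); illustrative value anchored to ~90 W for a 70 kg human

    def metabolic_rate(mass_kg):
        """Predicted basal metabolic rate in watts for an animal of the given mass."""
        return B0 * mass_kg**0.75

    for name, mass in [("mouse", 0.03), ("human", 70.0), ("elephant", 5000.0)]:
        print(name, round(metabolic_rate(mass), 1), "watts")

    # Because the exponent is 3/4, a 10,000-fold (four orders of magnitude) increase in mass
    # raises the predicted metabolic rate by only 10**3 = 1,000-fold (three orders of magnitude).
    print(metabolic_rate(10000.0) / metabolic_rate(1.0))    # ~1000.0

The particular numbers are only rough, but the point is the straight line: on log-log axes, animals spanning more than five orders of magnitude in mass fall close to a single line of slope ¾.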

Of course, we might again expect to find something like this—a consistent pattern across all life (and there are many other biological systems that exhibit this power law behaviour). It happens that it is only in systems where this power law behaviour is found that there is sufficient stability for life to be sustainable; and although, superficially at least, the probability of such a system evolving by chance seems small, any event viewed as the end of a chain of events is bound to look spectacularly improbable. When I first discovered this recurrence of pattern, my own impulse as a scientist was to discount any kind of external reason for it: one certainly does not—as some have suggested—require a designer for this kind of order.

However, the entire universe exhibits a tendency to produce these kinds of ‘signature’ patterns—patterns which, for no readily discernible naturalistic reason, are repeated not just within the evolved physical systems of life (where a systemic naturalistic explanation is possible) but across the observable cosmos. The frequency and severity of earthquakes, for example, obey the same kind of scaling law, something which helps explain why predicting earthquakes is so hard yet which has gone unexplained itself. Meteor impacts, too, obey this pattern; so do extinctions, as evidenced by the fossil record, and so does climate change. Scientists are aware of these patterns and of the lack of a naturalistic explanation for this consistency: why is it that the relationship between the amount of DNA and the number of cell types in a living body exhibits the same pattern as the changes in the brightness of light from a quasar? As Dr. West has observed, ‘If we understood the origins… of this scaling, we'd uncover something very profound.’

Dr West, unsurprisingly, has not publicly endorsed the idea that the consistent, universal recurrence of these patterns may not be amenable to naturalistic explanation. That caution is not, in itself, an erroneous route to take. Science, to date, has been built on an atheistic position—not in the sense that scientists are necessarily atheists themselves, and indeed many scientists have been profoundly religious men, but in the sense that the discipline itself has advanced by the rejection of supernatural explanations in favour of naturalistic causation. In doing so it has brought society far, and I still believe that science should be prejudiced in favour of naturalistic explanations insofar as these are available. The question is: at what point does one draw the line and say that a purely naturalistic hypothesis is no longer credible, or is even impossible?

This is no simple question, but I believe an answer does exist, and that this answer lies in pattern recurrence. Similar behaviour can be identified in related or connected systems as a result of simple naturalistic processes; when we find similar or identical patterns within a system we can suppose that a naturalistic explanation may exist (even though parts of the chain of causation may be lost, so that we cannot see the source of the similarity). In this situation it is consistent with reason and the scientific process to make the presumption in favour of the naturalistic, and indeed in innumerable instances this has proven to lead to great developments (where would we be without, say, Pasteur’s conviction that disease has a naturalistic cause?). Further, when only a statistically insignificant number of definitively unrelated systems show similar behaviour, it is not unreasonable to make allowance for coincidence or for the similarity to be due to a quirk in data processing.

However, these possibilities only hold true when the number of unrelated systems is limited—very, very limited, in fact, since given the wealth of data available to us and the accuracy of measurement we now possess, statistical significance sets in early. Recurrence of a consistent pattern across a wide range of systems unrelated by any factor save that they lie within the observable universe suggests something more. Such a ‘universal signature’ requires a universal cause, and there is no naturalistic hypothesis that can account for such a phenomenon. In short, it suggests the existence of a designer.

Some scientists have sought evidence for a designer in ‘irreducible complexity’, but to my mind this is mistaken—complexity can be explained naturalistically, and thus does not constitute evidence per se for a designer. The same cannot confidently be asserted of pattern data; certainly, all the observable phenomena exhibiting this replicated power law behaviour are, individually, explicable—even if only hypothetically so in some instances—but they all exhibit the same pattern of behaviour and cannot share the same naturalistic explanation. As such, I believe that the observed recurrence of these patterns meets the criteria for objectively observed evidence of the existence of a designer.

Here, perhaps, lies the answer to my initial question about what science can contribute in the twenty-first century. It is an insight that should stimulate scientists everywhere, and I have no doubt that as the paradigm shifts scientists in all fields will find their own ways to contribute to this great project. For now, however, many scientists are unwilling to accept that the implication of this growing body of pattern data may be that simply tweaking the model is no longer enough. As such, perhaps these scientists, though all themselves men and women of impeccable integrity, are like the woman who remains wedded to the psychopath, the husband who stays with the harlot.