Two potential revolutions in medicine

In the past few months, I have read about two ideas that may revolutionize medicine in the coming decade. They both have to do with data and data analysis, on a large scale.

The first one goes by the name of “Watson”. That is the name of an IBM computer system which became famous recently for beating human contestants on the game show “Jeopardy”.

http://www-03.ibm.com/innovation/us/watson/index.html

The most impressive thing about “Watson” is that it can make sense of information that is written in plain language, with all the ambiguities that this entails. In order to be successful on Jeopardy, it had to analyze a vast trove of plain-text information, and then be able to correctly understand the game’s questions and answer them by scouring all of that information.

Now here is where medicine comes in: While winning Jeopardy was a great public relations success for IBM, in the future they want to turn Watson into a kind of super-powerful omniscient medical assistant, among other uses. Watson will digest the medical literature and then be able to come up with diagnoses once you present it with symptoms.

I was listening to a talk about Watson by the head of that IBM unit recently (at the American Physical Society March Meeting 2012 in Boston). He recounted the story of a patient who had been saved only at the last minute: the patient’s symptoms had puzzled the doctors and were actually due to a very rare condition that only one (human) expert finally interpreted correctly. Upon feeding these very same symptoms into Watson, the computer came up with a list of possible causes, and the correct diagnosis was among the top few options that Watson listed. It is no wonder that the people at IBM are very enthusiastic about the prospects.

People have tried to build “expert” databases in the past, by carefully entering information into a computer system. However, the total amount of unprocessed plain-text information out there (e.g. in the medical literature) is vastly greater. Therefore, a system like Watson can potentially become a very powerful assistant. Probably no human expert could maintain the same overview of the whole literature. Of course, a human medical doctor will still have to look at the options that Watson presents, as a sanity check, and to order relevant follow-up tests on the patient.

Here is the second development that is potentially revolutionary. One serious shortcoming of medical diagnosis is the lack of quantitative data. By this I mean that only when you go to the doctor because you are feeling ill will they start to analyze your blood. However, the resulting values are informative only if they fall drastically outside some “standard interval”. In other words, you probably have to be very sick already. It would be much better to have baseline values for you personally, and not just the statistical average for the “typical” patient. Better yet, it would be great to have time-series information that shows how the values are slowly changing over time. Of course, there will be harmless fluctuations and drifts, but you might spot dangerous trends (or sudden changes) very early. If concentration X had been hovering around some value for a long time with only minor fluctuations, and then one day began to slowly drift upward, you could catch this trend and respond to it long before it became really dangerous.
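To make this concrete, here is a minimal sketch in Python of how such a drift could be flagged automatically (the marker values, window lengths, and threshold are all invented for illustration; a real system would need proper statistics and clinically validated limits): compare the recent average of a tracked value with its long-term personal baseline, and raise a flag when the deviation exceeds the normal fluctuation range.

    # Minimal sketch: flag a slow upward drift in a personal biomarker time series.
    # The data and threshold are made up for illustration only.

    def detect_drift(values, baseline_window=12, recent_window=3, n_sigmas=3.0):
        """Return True if the recent average deviates from the personal baseline
        by more than n_sigmas times the baseline's normal fluctuation."""
        if len(values) < baseline_window + recent_window:
            return False  # not enough history yet

        baseline = values[:baseline_window]
        recent = values[-recent_window:]

        mean = sum(baseline) / len(baseline)
        variance = sum((v - mean) ** 2 for v in baseline) / (len(baseline) - 1)
        sigma = variance ** 0.5

        recent_mean = sum(recent) / len(recent)
        return abs(recent_mean - mean) > n_sigmas * sigma

    # Hypothetical monthly measurements of some concentration "X":
    # stable around 5.0 for a year, then slowly drifting upward.
    history = [5.1, 4.9, 5.0, 5.2, 4.8, 5.0, 5.1, 4.9, 5.0, 5.1, 4.9, 5.0,
               5.3, 5.6, 5.9]

    print(detect_drift(history))  # True: the drift is caught while still small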

A first step in this direction has been made recently. Read more about it in this New York Times article:

http://www.nytimes.com/2012/06/03/business/geneticists-research-finds-his-own-diabetes.html

The geneticist Michael Snyder, who directed the study, was also its subject. First, a genetic analysis was done for him. Then, every two months or so, extensive blood tests were performed. Sometime during the study, he developed diabetes. Thanks to the good time resolution and the extensive tracking of many different proteins, the researchers were even able to tell that the illness likely started when he caught a cold. A connection of this type had not been observed before. Since the diabetes was detected early on, treatment could also start at an early stage, reducing the harm.

At the moment, of course, this approach is still much too expensive for the general public. Each blood sample analysis cost about $2500. However, the costs will likely come down in the future, and one may also focus on a smaller subset of the roughly 40,000 molecules that were tracked in this study.

If regular blood tests were combined with the capabilities of an expert computer system like Watson, it is very likely that problems could be detected much earlier than they are now. Of course, this raises once again the question of what to do when diagnosis outpaces treatment options, and that will likely (apart from costs) be one of the most hotly debated issues in the years to come.


The future, back then

I am currently reading a book about the future. The book is from 1910.

The book is a collection of essays by prominent Germans (of 1910), put together by the journalist Arthur Brehmer. It is called “The world in 100 years” (“Die Welt in 100 Jahren”), and it was recently republished and became the German “Science Book of the Year” 2010.

For all the German readers, this book is highly recommended! It is fascinating on many different levels, both for the predictions it gets right and those which in retrospect sound naive or even highly problematic. In addition, it contains many marvelous illustrations.

The most striking example of what is predicted correctly is the cell-phone. In 1910, wireless communication had just been invented, and it was clear that this would revolutionize many things. In the book, you can read about how they imagined people running around with a small phone in their pocket and being able to call (and maybe even see) their friends across the world.

When they get things wrong, it is often in such a way that even back then one could have identified the problematic assumptions. For example, it is usually very dangerous to extrapolate naively, especially with regard to things that are currently en vogue. Back then, radioactive elements, especially radium, had just been discovered. For the first few years they were seen as a kind of miracle substance, and several authors in the book speculate about all illnesses being conquered in the future by the use of radium.

One can also spot some highly dangerous trends of thinking that would later contribute to disastrous developments. In several contributions, eugenics is touted as a progressive idea. More generally, there is often the assumption that the state will be able to make sweeping and arbitrary changes to people’s lives, with the majority of the population apparently regarded as nameless masses. As long as it fits the “greater good”, this is considered beneficial, without questioning the impact on the individual.

Then there is the typical fallacy of futurist planning: the assumption that cities (or countries, etc.) can and will be remade from scratch. Even in 1910 the authors could have just looked back over the centuries and seen that cities tend to grow organically and are seldom redone entirely according to some grand new plan. The few exceptions to this rule, artificial cities erected in the 20th century, are usually relatively bleak places that people wouldn’t move to unless they had to.

However, one has to say that the 23 authors vary greatly in their opinions, assumptions, ideology, intelligence, and wisdom, and consequently in their predictions. One of the best essays (to my mind) is the one by Bertha von Suttner, the peace activist. She describes some of the institutions and ideas that are now established to preserve peace, of course with very incomplete success.

Maybe I will pick some of the essays from the book and comment on them in more detail in future blog posts.

Interference

Interference of waves is one of the most important phenomena in nature.

Even two circular waves spreading outward already show quite an intricate pattern. Witness the blue stripes going diagonally across the picture (e.g. in the upper left part). These are the places where the waves cancel each other and the water surface would not move.
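If you would like to play with this yourself, here is a small sketch in Python (all numbers are arbitrary choices, and a real water surface is of course more complicated) that adds up two circular waves and prints a rough picture of where they reinforce and where they cancel:

    import math

    # Two point sources emitting circular waves; the total displacement at any
    # point is simply the sum of the two waves (superposition). Along certain
    # lines the two contributions cancel: these are the quiet stripes in the
    # pattern. All numbers here are arbitrary choices for illustration.

    wavelength = 3.0
    k = 2 * math.pi / wavelength           # wave number
    sources = [(-2.0, 0.0), (2.0, 0.0)]    # positions of the two wave sources

    def displacement(x, y):
        total = 0.0
        for (sx, sy) in sources:
            r = math.hypot(x - sx, y - sy)
            total += math.cos(k * r)       # snapshot of a circular wave at one instant
        return total

    # Print a coarse ASCII picture: '#' where the waves reinforce, ' ' where they cancel.
    for row in range(30):
        y = 7.5 - 0.5 * row
        line = ""
        for col in range(60):
            x = -15.0 + 0.5 * col
            line += "#" if abs(displacement(x, y)) > 1.0 else " "
        print(line)

The blank bands in the printout are the lines along which the two waves cancel each other.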

Here are only a few of the many phenomena that involve interference of waves:

  • The rainbow-colored appearance of thin soap or oil films, and the iridescent colors of some butterflies and insects
  • The fluctuating strength of a radio signal scattered from trees, buildings, etc., depending on where you try to receive it
  • Highly precise distance measurements, e.g. to detect movements of the Earth

Best history of science & technology TV series ever (“Connections” by James Burke)

Only a few times in each century, in every field of human endeavour, there will be some work that is so outstanding it defies comparison.

In explaining the history of science and technology to the public, this kind of singular event is epitomized by the famous “Connections”, a 1978 BBC TV series by the science historian James Burke.

The basic idea of that series is to show how a chain of inventions throughout the ages is interconnected to produce some essential aspect of modern-day life. And one of the main messages, presumably, is that you could never ever have predicted these sometimes weird connections that led to the astonishing technological and scientific progress that we sometimes take for granted.

“Connections” can be found on YouTube in its entirety (though this is of course not entirely legal, and the image quality is just the typical YouTube quality, but never mind).

You can find links to the ten “Connections” episodes at the following link (if that does not work for you, see below):

Connections playlist on YouTube

Note that each episode lasts about an hour and has been subdivided into 5 segments for YouTube. By the way, don’t be put off by the fact that what was modern technology back then of course looks less modern now. You can easily replace it in your mind with the most recent gadgets, and the story still works…

Here are links to the other James Burke TV series on YouTube:

Playlists for James Burke series

Enjoy this fantastic series!

Also, if you enjoy the series, read more about James Burke’s latest project, the “k-web” online knowledge web:

http://knowledgeweb.blogspot.com/

Direct links to “Connections” episodes

Episode 1 — The Trigger Effect
What happens if civilization were to break down? And why is the plough so important?

Trigger Effect 1/5
Trigger Effect 2/5
Trigger Effect 3/5
Trigger Effect 4/5
Trigger Effect 5/5

Episode 2 — Death in the Morning

Death in the Morning 1/5
Death in the Morning 2/5
Death in the Morning 3/5
Death in the Morning 4/5
Death in the Morning 5/5

Understanding nature (part II)

(see the previous post, if you haven’t read it already)

So we were looking at that fountain in the Tuileries gardens, and you have read a story about light rays and water.

What else?

There is the reddish glow of the warm sunlight. And, quite generally, the colors of things. But let us take things one step at a time.

First, it was warm (it really was a nice summer day when I took that picture). What does it mean for something to be hot or cold, on a basic level? Again, suppose you met your scientist at the opening of the Tuileries in the mid-1500s. What would the state of the art have been back then? Pretty lousy, it turns out!

Thermometers were developed only around 1600 (among others, by Galileo), although the principle that hot substances tend to expand had been known even to the ancient Greeks. And even if you have a working thermometer, you still wonder: what is the basic reason for something to appear hot or cold?

The first one to really get it right was Daniel Bernoulli, who explained the idea in his book Hydrodynamica in 1738. His idea is simple and beautiful: heat is motion. A gas is made up of billions and billions of molecules. These are not at rest, but constantly moving. Their average kinetic energy is a direct measure of temperature: the faster they are, the higher the temperature. The same happens in solid substances and liquids, where a kind of “jitter” motion is constantly going on, with particles bouncing back and forth, never really at rest. When a cold body receives heat from a warmer one, its particles start moving around faster. All of this you cannot see directly, because the particles (atoms and molecules) are about a thousand times smaller than what even the best light microscope can resolve, but you witness the effects of this microscopic motion by feeling the temperature change.
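To get a feeling for what this microscopic motion means in numbers, here is a quick back-of-the-envelope sketch (using the standard modern kinetic-theory relation between temperature and average kinetic energy, not a calculation from Bernoulli’s book): at room temperature, the nitrogen molecules in the air around you fly around at several hundred meters per second.

    import math

    # Back-of-the-envelope illustration of "heat is motion": in kinetic theory
    # the average kinetic energy of a gas molecule is (3/2) k_B T, so a typical
    # (root-mean-square) molecular speed is v_rms = sqrt(3 k_B T / m).

    k_B = 1.380649e-23         # Boltzmann constant in J/K
    m_N2 = 28 * 1.66054e-27    # mass of a nitrogen molecule (N2) in kg

    def v_rms(temperature_kelvin, mass_kg):
        return math.sqrt(3 * k_B * temperature_kelvin / mass_kg)

    print(f"{v_rms(300, m_N2):.0f} m/s")   # roughly 500 m/s at room temperature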

From there on, the theory of thermodynamics and statistical mechanics continued to develop (at first rather slowly, it must be admitted). A lot of useful insights resulted. For example, at the beginning of the 19th century, a French engineer called Carnot realized that you cannot convert heat entirely into useful mechanical work. That means there are fundamental limits to the efficiency of power plants. Beginning in the middle of the 19th century, Maxwell and Boltzmann put Bernoulli’s ideas about the gas particles on a more quantitative footing. All of our microscopic understanding of the properties of materials or of the workings of living matter (cells) rests on the principles discovered back then.
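Carnot’s result, by the way, fits into a single line. A heat engine working between a hot reservoir at temperature T_h and a cold one at T_c (both measured in kelvin) can never be more efficient than

    \eta_{\max} = 1 - \frac{T_c}{T_h}

For example, a plant with steam at about 800 K dumping its waste heat at about 300 K can convert at most 1 − 300/800 ≈ 62% of the heat into work, no matter how cleverly it is engineered.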

Understanding nature (part I)

As a scientist you are always in danger of getting stuck in your tiny little corner, struggling with the particular research problem that you have chosen at the moment. So from time to time I like to remind myself of the bigger picture. One good way of doing this is simply looking out of the window and trying to think about which of the natural phenomena you see around you every day we can already understand.

Instead of a view out of my window, here is a picture I took two years ago. It shows a fountain in the Tuileries gardens near the Louvre in Paris. You encounter it when you walk from the Louvre to the Obelisk.

 

Now that seems pretty simple: Some water, the stone of the fountain, and a dove. Also, there’s part of a chair in the foreground.

Try to think for a moment which natural phenomena enter this picture.

There’s the water with the little ripples, there’s the reflection of the sunlight, there’s the material of the stone.

But before we turn to those things, there’s the fact that you can see this image at all. That is, how do your eyes perceive an image? In fact, all of it can be explained by a very simple rule: light rays travel in straight lines until they hit a surface, where they are absorbed or reflected, and finally they hit your eyes. Light rays as straight lines is a very powerful concept: for example, it lets you predict what the shadows should look like if you know where the light comes from. It’s also the reason for the phenomenon of perspective, and light rays are used in computer graphics to calculate the appearance of a three-dimensional scene.
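To see how far the straight-line rule alone carries you, here is a tiny sketch in Python (the pole, the flat ground, and the sun angle are all invented for illustration) that predicts where a shadow ends simply by following a straight ray through the tip of a pole down to the ground:

    import math

    # Sketch: treating light rays as straight lines is already enough to predict
    # a shadow. A ray from the sun passes through the tip of a pole and continues
    # in a straight line until it hits the ground (y = 0); that point is the tip
    # of the shadow. The geometry and the sun's elevation are made up here.

    def shadow_tip(pole_base_x, pole_height, sun_direction):
        """Follow the straight ray through the pole tip down to the ground plane y = 0."""
        dx, dy = sun_direction              # direction in which the light travels (dy < 0)
        tip_x, tip_y = pole_base_x, pole_height
        t = -tip_y / dy                     # how far along the ray until it reaches y = 0
        return tip_x + t * dx

    # Sun 30 degrees above the horizon, light travelling toward +x and downward:
    angle = math.radians(30)
    sun_dir = (math.cos(angle), -math.sin(angle))

    x = shadow_tip(pole_base_x=0.0, pole_height=2.0, sun_direction=sun_dir)
    print(f"The shadow of a 2 m pole ends {x:.2f} m from its base.")  # about 3.46 m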

The Tuileries gardens were created in 1564. So how much of the story about light rays was known at the time? It turns out, pretty much everything! Euclid had already written a treatise called ‘Optics’ in which he used light rays, and the Greeks and Romans knew how to make some kinds of lenses. Wearable eyeglasses based on these concepts had appeared in the 13th century, at around the same time that perspective was discovered in art. So light rays would have seemed a very well-understood concept to any scientist you might have met at the time strolling through the Tuileries. (see the Optics page on Wikipedia)

Then, what about the water? Obviously, the most basic ideas about water were known in a qualitative way throughout history; otherwise you would have a hard time steering ships through water. Archimedes had already described the concept of buoyancy: an object that is lighter than water will be pushed up. And Leonardo da Vinci had drawn many sketches of swirls of water (vortices), inspecting them closely. But none of them could have given you predictions of how exactly the water currents would look when you moved a ship or any other object through water. Those insights still had to wait about two more centuries. People like Newton and Leibniz would first have to develop the idea of describing changes as composed of many very small steps (differential calculus). That was around 1700. About half a century later, mathematics had become advanced enough to describe in the same way small changes in space and time (partial differential equations). So in 1757 Leonhard Euler wrote down the first equations of fluid dynamics, describing how the velocity field of a fluid like water (or air) changes with time. If you know the velocity at every point in space at this moment in time, you can predict it for the next moment, and from there on to eternity. (see the Euler equations on Wikipedia)
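For the curious, here is what Euler’s equations look like in modern notation (this is the standard textbook form for an incompressible fluid with velocity field u, pressure p, and constant density ρ, not a transcription of the 1757 paper):

    \frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u} \cdot \nabla)\,\mathbf{u} = -\frac{1}{\rho}\,\nabla p,
    \qquad \nabla \cdot \mathbf{u} = 0

The first equation is Newton’s “force equals mass times acceleration” applied to each small parcel of fluid, with pressure differences supplying the force; the second simply says that the fluid is not compressed. Adding a friction (viscosity) term to the first equation gives the equations behind the airplane and weather simulations mentioned below.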

With a few additional steps (like introducing friction into the equations), these equations for fluid flow have become extremely powerful. They can now be used to simulate the flow of air around the wings of an airplane, entirely in the computer, before the plane ever takes off for the first time. And they predict changing weather patterns at least a few days in advance, which is good enough to be useful. All of that came about because people were not content with just knowing in a rough, qualitative way how water may behave, but tried to systematically analyze the details, a process that took centuries because all the mathematical tools first had to be developed.

(to be continued)