One missing fact can render an interpretation completely wrong. Even in science.

What hope is there, then, for reliable scientific interpretation, given the incompleteness of science and the missing facts that incompleteness necessarily entails? Why do people swear by current scientific interpretations? One should be very skeptical of all scientific interpretations and critical of the methods used to gather and statistically analyze the underlying data.

Consider the following example: a woman is observed guzzling alcohol at lunch and driving erratically at the end of the workday. She is confused, irritable, and frequently slurs her words. She is also short-tempered with anyone who questions her about her drinking.

The obvious interpretation of these facts is that she is having difficulties with over-consumption of alcohol.

Now for the SINGLE missing fact that changes the entire interpretation.

Her breath smells like acetone.

She is not an alcoholic. In fact, she drinks no more than the average person. In reality, she guzzles all liquids (alcohol, water, whatever) because she is extremely thirsty. She is so thirsty because she is suffering from extremely high blood sugar and ketoacidosis (both of which carry bound water right out of the body). All of the facts above are explained by diabetic ketoacidosis.

ONE missing fact changes the entire interpretation. We really should keep this in mind when we are tempted to act overconfidently on current scientific interpretations, from which many facts are still missing.

For example: most modern nutritional interpretations are wrong because scientists have chosen to ignore or disparage the old data, observations, and isolated but meaningful anecdotal evidence. The proper procedure is to model all presumptively valid data and observations; a set of data suspected of invalidity needs to be re-tested.

Smoking and the low road to Hell

Ask anyone what the greatest danger of smoking is and you will get one of three answers: emphysema (E), cancer (C), or heart disease (HD).

The road to Hell starts much sooner, with an apparently innocuous phenomenon: smoking-induced damage to the lower esophageal sphincter.

It seems so trivial. Yet add another bad habit, such as reclining after eating, and serious GERD can result.

Let’s add poor sleep coupled with excessive coffee drinking to stimulate stomach acid production. Let’s add a stress-filled job and regular consumption of NSAIDs for headaches to wear down the protective extrinsic barrier of the stomach. Let’s add regular alcohol consumption to impair judgment, reduce liver function, and disrupt the entire GI tract. The result: unbelievable pain every time one tries to eat.

From there one gets acid blockers. Problem solved. Aren’t docs the best?

Not exactly. Now pathogenic bacteria get a foothold in the digestive tract. Digestion and absorption of nutrients suffer, and the whole system grows ever more weakened and unbalanced. Irritable or inflammatory bowel is the likely long-term result. Doctors, of course, make it worse with oral antibiotics. As a consequence of irritable bowel, add sporadic eating to compound the problems with digestion and absorption of nutrients.

Add ever more alcohol, reduced sleep, more coffee, and the potential for lethal results looms large.

This is years before E, C, or HD.

This is smoking’s road to Hell. Not even in the anti-smoking ads. When will doctors figure this one out? Next century perhaps.

Why would anyone interfere with such well-orchestrated processes?

Just prior to eating and while we eat, stomach acid is pumped out. As acidification proceeds, vitamin B12 binding proteins and intrinsic factor (which will later play an important role in vitamin B12 absorption) are secreted, food particles disaggregate, and proteins begin to denature, releasing their bound nutrients. Pepsinogen is secreted, and acid converts it to active pepsin, which begins digestion of the acid-denatured proteins. Acid secretion reduces the activity of acidophobic bacteria, which include most of the species harmful to health, while also solubilizing calcium and other sparingly soluble minerals. As the proteins are digested, amino acids are released, raising the pH and making the resulting chyme easier to neutralize in the small intestine. There, the mild acid left over from stomach digestion triggers the release of bicarbonate-rich digestive juices, the pH becomes roughly neutral, protein digestion gets going in earnest, the solubilized nutrients are absorbed, and vitamin B12 is released from its initial binding proteins, bound by intrinsic factor, and finally absorbed in the ileum.

After the stomach empties its contents, the stomach lining is repaired during fasting in preparation for the next acidification/digestion cycle, unless of course we take NSAIDs. Failure to rebuild a good stomach lining is all but guaranteed to make the acidification in the next cycle hurt more, increasing the likelihood that the patient will seek out acid blockers. A vicious circle, a downward spiral. Might not oral antibiotics do the same thing by denuding the bacterial layer of the extrinsic barrier?

What a well-orchestrated process! Who in his right mind would interfere with it by taking antacids, acid blockers, NSAIDs, or anything of the kind?

Oral antibiotics – another insane convenience. Except in an emergency, why would one want to kill beneficial bacteria as well as harmful ones? One of the best defenses against a serious infection is a well-populated, ecologically balanced colon. Antibiotics should normally be given only topically or, if systemically, by injection. Almost never ingested!

My second biggest mistake

I invented a technique that was almost as sensitive as PCR, without using exponential amplification, a feat Helen Lee of Abbott called “impossible” – even more impossible when one considers that my technique was also more quantitative, used less extensive sample preparation, and had a much faster time to first result.

I did not know at the time, and neither (I dare say) did Kary Mullis, the inventor of PCR, that a future need would determine the ultimate fate of these two techniques.

In the early 1980s, that future need was multiplex analysis – the ability to analyze dozens, hundreds, even thousands of sequences at once – something PCR proved adept at doing. After all, in human cells, thousands of genes are replicated during the S phase of the cell cycle.

My biggest mistake was not inventing PCR when I had the chance in 1979.

My second biggest mistake: not inverting the format of my own sensitive hybridization technique, called RTC, reversible target capture. With that inversion, my technique could have been as sensitive as PCR, and more sensitive than RT-PCR (which has the inefficient reverse transcriptase step). The RTC technique might still be in use today for applications requiring superior quantitation, and especially with RNA targets.

A multiplex RTC with more than a dozen targets would seem to be a stretch without alternative, more efficient capture methods requiring fewer beads. (A bead-binning mechanism that separated the different capture beads carrying the different targets after hybridization would be required; compare the bead-based approach of Illumina.)

By “inversion” I mean using the homopolymers for quantitative, linear signal amplification, not for capture. Today I would design two or three non-interacting capture extender probe sequences to be used in the RTC. I would do a liquid-phase hybridization with the three capture extender sequences and one to ten label extender sequences. The label extender probes that hybridize to the target would be tailed with poly(dC) of about 3,000 or more residues. I would add six dC residues to the 3′ end of each oligonucleotide probe to make the tailing more uniform.

After washing away excess capture and label extender probes at high stringency, I would use first amplifier probes, composed of oligo(dG)-poly(dA), to hybridize to the label extenders that are hybridized to the captured target. The oligo(dG) would be short, 10-12 nucleotides, maybe 15, because of aggregation (solubility) problems. An oligo(dA) of six nucleotides would be included during synthesis of these generic first amplifier probes to get better, more uniform poly(dA) tailing [and longer dA sequences, if adding them to the oligo(dG) would in fact reduce aggregation problems with the sequences]. The poly(dA) tail would be about 3,000 or more nucleotides. After washing, I would hybridize oligo(dT)-enzyme complexes to the poly(dA) tails. The oligo(dT) would optimally be between 20 and 30 nucleotides for decent stability.
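
To get a feel for the degree of linear amplification this layering could provide, here is a minimal back-of-the-envelope sketch in Python. The tail lengths and probe footprints are taken from the description above; the number of label extenders per target and the assumption of dense, non-overlapping hybridization along each tail are hypothetical, so the result is an optimistic upper bound rather than a measured value.

```python
# Back-of-the-envelope estimate of linear signal amplification in the
# inverted RTC scheme sketched above. All parameters are assumptions
# for illustration, not measured values.

label_extenders_per_target = 10      # hypothetical, within the 1-10 range mentioned in the text
poly_dC_tail_length = 3000           # nucleotides of poly(dC) per label extender (from the text)
oligo_dG_footprint = 12              # nucleotides of oligo(dG) per first amplifier probe (from the text)
poly_dA_tail_length = 3000           # nucleotides of poly(dA) per first amplifier probe (from the text)
oligo_dT_footprint = 25              # nucleotides of oligo(dT) per enzyme label (20-30 in the text)

# Each poly(dC) tail could accommodate roughly this many oligo(dG) amplifier probes,
# assuming dense, non-overlapping hybridization (an optimistic upper bound).
amplifiers_per_label_extender = poly_dC_tail_length // oligo_dG_footprint

# Each poly(dA) tail could accommodate roughly this many oligo(dT)-enzyme conjugates.
enzymes_per_amplifier = poly_dA_tail_length // oligo_dT_footprint

enzymes_per_target = (label_extenders_per_target
                      * amplifiers_per_label_extender
                      * enzymes_per_amplifier)

print(f"Amplifier probes per label extender: {amplifiers_per_label_extender}")
print(f"Enzyme labels per amplifier probe:   {enzymes_per_amplifier}")
print(f"Enzyme labels per captured target:   {enzymes_per_target:,}")
# With these optimistic assumptions, each target carries on the order of
# 10 * 250 * 120 = 300,000 enzyme labels -- large, but strictly linear in target number.
```

Even with far less efficient packing along the tails, the point stands: the gain is large but strictly linear, so doubling the number of target molecules doubles the number of enzyme labels.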

After stringent washing, I would use displacement hybridization of the capture extender-capture probe pair to do the release steps at low stringency (so that most of the non-specifically bound background would be left behind on the solid support). The displacement hybridization would be facilitated by making the capture sequence and its complementary displacing sequence longer than the capture extender sequence.

Then I would bind the probe-target complex to a second solid support containing a second capture sequence, wash stringently, and release by low-stringency displacement hybridization. This second capture step would reduce both major types of background noise: non-specific binding of labeling sequences and non-specific hybridization between labeling sequences and capturing sequences.
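
A toy calculation can illustrate why successive capture/wash/release rounds suppress background so effectively. The recovery and carry-over fractions below are purely hypothetical placeholders, not measured values for the RTC chemistry; the point is only the multiplicative structure.

```python
# Toy model of why repeated capture/release cycles suppress background.
# The recovery and carry-over fractions are hypothetical illustrations.

target_recovery_per_cycle = 0.90       # fraction of specific complexes surviving each capture/release
background_carryover_per_cycle = 0.01  # fraction of non-specifically bound material carried into the eluate

signal = 1.0       # relative specific signal entering the first capture
background = 1.0   # relative non-specific background entering the first capture

for cycle in range(1, 4):  # three rounds of capture, wash, release
    signal *= target_recovery_per_cycle
    background *= background_carryover_per_cycle
    print(f"after capture {cycle}: signal = {signal:.3f}, "
          f"background = {background:.2e}, "
          f"signal/background improvement = {signal / background:.1e}x")

# Specific complexes are recovered efficiently each round, while non-specific
# binding is (mostly) not regenerated, so the signal-to-background ratio improves
# roughly as (recovery / carry-over) raised to the number of cycles.
```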

At this point, the enzyme could be detected in solution.

Finally, if necessary, capture a third time to reduce the background (non-specific binding and non-specific hybridization), and detect the enzyme label. Alternatively, detect the enzyme label after the third release.

When micron-scale beads are used as the solid support, and as a general rule, I would filter the solution after each release to trap beads and bead fragments that would otherwise contaminate the eluate.

For even greater sensitivity, the target could be concentrated into a “dot” prior to detection. A microdot, a nanodot, a picodot, etc. Whatever is required. Electrofocusing would be one method of concentrating nucleic acid targets. For even greater sensitivity, the target could be cleaved into many targetlets prior to the first hybridizations.

For even greater sensitivity, “smart” systems could be used, such as two types of label extender sequences [prepared by a long series of ligations of unique sequences] combined with a signal-generating scheme that requires both sequences to be present. For example, the second type of label extender could bind a preamplifier and amplifier that bring the chemiluminescent enhancer directly onto the target. This may save a round of capture.

Sensitivity, speed, and quantification, with generic, linear, but powerful amplification sequences: there is a need to validate so-called quantitative PCR assays, and this i-RTC (“i” for inverted) method could be used there to great advantage. There is no need to label i-RTC as quantitative; it is that in spades. Multiple oligonucleotides – at least three or four in total – are needed to drive hybridization efficiency to near 100% in strongly structured targets such as certain RNAs. Every hybridization reaction is driven to completion to ensure the accuracy and precision of the quantitation, and the linearity of the signal amplification gives the most accurate quantification across the dose-response curve.
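
As a rough illustration of why a fixed, linear gain lends itself to quantitation better than exponential amplification, consider the following sketch. The per-cycle efficiencies, cycle number, and gain figure are hypothetical; the comparison simply contrasts a fixed multiplicative gain with an exponential one.

```python
# Sketch of why linear amplification is inherently easier to quantify than
# exponential amplification. All parameter values are hypothetical.

input_copies = 1000

# Linear amplification: signal = copies * fixed gain.
linear_gain = 300_000                       # e.g., enzyme labels per target (see earlier sketch)
linear_signal = input_copies * linear_gain  # any error in the gain propagates only proportionally

# Exponential amplification: signal = copies * (1 + efficiency) ** cycles.
cycles = 30
for efficiency in (0.90, 0.95):             # a small swing in per-cycle efficiency
    exp_signal = input_copies * (1 + efficiency) ** cycles
    print(f"PCR-like, efficiency {efficiency:.2f}: signal = {exp_signal:.3e}")

# (1.95 / 1.90) ** 30 is roughly a 2.2-fold difference in final signal from that
# small efficiency change, whereas the linear scheme's signal changes only in
# direct proportion to any change in its single, fixed gain.
print(f"Linear scheme: signal = {linear_signal:.3e}, strictly proportional to input copies")
```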

Notes:

1. Improving the RTC method: for the first multiplex capture, use a single capture sequence for convenience, or use multiple sequences for greater specificity.

2. More signal: Hybridize an oligo(dT)-branched DNA to the poly(dA) tail and wash well. Hybridize a complementary enzyme-labeled sequence, wash well, release, recapture, wash, release, and detect.

My biggest mistake

In the Spring of 1979 I took a course in Molecular Genetics at Ohio State University. The professor was a brilliant yeast mitochondrial geneticist by the name of Philip S. Perlman. In one lecture that I will never forget he talked about tools that were needed to further develop molecular biology research. He defined them in outline form, sketches really: in vitro replication, in vitro recombination, and in vitro transcription. He stressed what a really big deal these techniques would be. We could only glimpse just how important these techniques were to become. Today, the technique of in vitro replication goes by the name of PCR, which won the Nobel prize for Kary Mullis.

That very day, after the lecture, I should have gone up to Dr. Perlman and asked him if I could join his research group to develop in vitro replication as a research tool. What a huge mistake. But I did not want to work on ANY methods project for a dissertation, because such projects take longer, usually much longer: one has to research, develop, and debug the technique, and then use the highly developed technique to advance scientific knowledge. The latter is itself a complete dissertation project.

Who cares that it takes longer? How incredibly stupid. Ironically, I ended up having to develop new methods for my utterly forgettable dissertation anyway.

As it turned out, my major professor needed the very in vitro replication technique that I had passed on developing. He had been amplifying genes by serial selection of cells to ever higher drug levels (using methotrexate for DHFR, fluorodeoxyuridine for TS). He had to validate the new cell lines to be sure that the method of regulating gene expression had not changed when the gene was amplified more than 100-fold. How much simpler it would have been to assay the original cells, using in vitro replication to improve the sensitivity of detection of low-copy messenger RNAs!