Monday, December 28, 2015

Book Smarts and Common Sense in Medicine - Why Highly Intelligent People Make Bad Decisions

In the presentation on Epistemic Problems in Medicine on the Medical Evidence Blog, I begin by highlighting the difference between intelligence (book smarts) and rationality (common sense).  Oftentimes thought to be one and the same, they are distinctly different, and understanding failures of common sense among very intelligent people can illuminate many problems that we see in medicine, several of which have been highlighted on this blog.

Intelligence is the ability of the mind to function algorithmically, like a computer.  Intelligent people are good at learning, through rote memorization, rules that can be applied to solve well-defined problems.  They are also good at pattern recognition, which allows them to identify a problem type and know which rule applies to it.  This kind of intelligence is measured quite precisely by IQ tests.  It is correlated with scores on college entrance exams like the ACT and SAT and with other entrance tests such as the MCAT.  Of course, intelligent people must devote the time to learn the rules in order to answer the questions on these tests, which measure both aptitude and achievement.

Rationality, I think, is more closely aligned with the notion of common sense, and it shows little significant correlation with IQ in any domain in which it has been investigated.  Cognitive psychologists talk about two kinds of rationality.  The first is how well a person's beliefs map onto reality (the actual structure of the world), and it has been termed epistemic rationality (sometimes also called theoretical or evidential rationality).  Persons with epistemic rationality hold beliefs that are congruent with the world around them and that are strong in proportion to the strength of the evidence supporting them.  Thus a physician who believes that bloodletting or mercury therapy cures disease in the 21st century would be considered to have suboptimal epistemic rationality, as would a person whose fear of Hantavirus while hiking in New Mexico is grossly disproportionate to the actual statistical risk.
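To make "beliefs in proportion to the evidence" concrete, here is a minimal sketch (my own addition, with entirely hypothetical numbers) of Bayesian updating: the rational degree of belief after a finding depends on the baseline risk and the strength of the evidence, not on how vivid or frightening the possibility feels.

```python
# Minimal sketch of "beliefs in proportion to evidence" via Bayes' rule.
# All numbers are hypothetical and chosen only for illustration.

def posterior(prior, sensitivity, false_positive_rate):
    """P(disease | positive finding), given a prior and the evidence's characteristics."""
    p_positive = sensitivity * prior + false_positive_rate * (1 - prior)
    return (sensitivity * prior) / p_positive

prior = 1e-5  # hypothetical baseline risk of a feared but rare disease

weak = posterior(prior, sensitivity=0.9, false_positive_rate=0.30)     # vague, nonspecific evidence
strong = posterior(prior, sensitivity=0.9, false_positive_rate=0.001)  # highly specific evidence

print(f"Belief after weak evidence:   {weak:.6f}")
print(f"Belief after strong evidence: {strong:.6f}")
# Epistemic rationality means holding beliefs that track these numbers,
# not the vividness of the feared outcome.
```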


The second flavor of rationality is called instrumental rationality.  This kind of rationality concerns the use of one's map of the structure of the world (epistemic rationality) to employ good judgment and decision making in the selection of goals and the execution of plans for their fulfillment.  It is particularly in the search for, selection of, and evaluation of goals that otherwise very intelligent people do things that appear to "miss the big picture" or lack common sense.  Rational execution of a plan to achieve a goal does not make the goal itself rational, and some goals are more rational than others.  If we were to take a person's stated goals prima facie, and evaluate that person's rationality only on the basis of how well they acted to fulfill those goals regardless of how rational the goals were, we would be forced to consider successful perpetrators of terrorism, mass murder, and suicide as rational executors of their goals.  In the following examples, I will classify failures of common sense among intelligent people (mostly physicians) on the basis of where their rationality is remiss.

INADEQUATE SEARCH FOR GOALS

In medicine, certain goals are taken for granted as paramount and plans are made to pursue their achievement without an active search for other goals and an evaluation of the legitimacy of the usual goals (or the usual algorithms) in a particular case.  A rational person will actively search for goals and weigh them against one another, counterbalancing the risks and benefits of each goal to make a holistic decision about which goal(s) to pursue.  Without an active search, certain goals will rise to primacy again and again even when they don't make sense in a particular case.  For example, the dictum "save lives" may become the default goal, even when attempts to achieve it are likely to be unsuccessful or outweighed by the risks, costs, and trade-offs - and even when the patient doesn't want that goal pursued.  Alternatively, a physician may choose to add yet another expensive diabetes pill of marginal benefit to attempt to more tightly control blood sugars (default goal) in a man who already struggles with his medication schedule and can scarcely afford his existing medications.  Or, a goal that was appropriate when the patient was 72 years old (warfarin anticoagulation for atrial fibrillation) is no longer appropriate when the patient is 82 years old and has an unsteady gait and a life expectancy limited by cancer.  Goals can change.   Rationality depends upon conducting an adequate search for relevant goals and prioritizing and balancing them.  Without an adequate search for goals and evidence to compare them against one another, instrumental rationality is not possible and a default goal may be pursued mindlessly and robotically.
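As a toy illustration of what an active search for and weighing of goals might look like (my own sketch, with entirely made-up numbers, not a clinical tool), the candidate goals in a case like the ones above can at least be enumerated and compared by expected net benefit rather than pursued by default:

```python
# Toy sketch: enumerate candidate goals and compare their expected net benefit.
# The goals, benefits, and burdens are hypothetical placeholders.

candidate_goals = {
    "tighter glycemic control (add another pill)":   {"benefit": 0.5, "burden": 2.0},
    "simplify and afford the existing regimen":      {"benefit": 1.5, "burden": 0.2},
    "align care with the patient's own priorities":  {"benefit": 2.0, "burden": 0.1},
}

def net_value(name):
    g = candidate_goals[name]
    return g["benefit"] - g["burden"]  # crude stand-in for weighing risks, costs, trade-offs

# An adequate search at least makes the alternatives explicit before choosing.
for name in sorted(candidate_goals, key=net_value, reverse=True):
    print(f"{net_value(name):+.1f}  {name}")
```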

TOO NARROW A CIRCUMSCRIPTION OF GOALS

Related to but distinct from inadequate search for goals is the problem of selecting too narrow a goal.  Consider an intelligent and technically skilled surgeon who is called about an 89-year-old bedridden woman from the nursing home who is found to have 60 cm of dead bowel and perforation on CT scan.  If the surgeon delimits his goal very narrowly, such as the technical success of the operation and survival of the patient to the ICU, and the surgeon perfectly executes the operative algorithm for dead bowel resection (proper incisions, suture lines, etc.), we can say that the surgeon is rational within the confines of his narrowly circumscribed goal.  Nonetheless, it is hard to conclude that he has common sense if we know that he knows, or should know, from dozens of past cases, that patients such as this one almost all die in the ICU or within six months of the operation, that they suffer miserably and have no quality of life, or that they or their surrogates would not choose to have the operation if they knew what the surgeon knows or should know from his experience.  If the surgeon does know the likely outcome of cases like this, but he ignores it and operates anyway, he is lacking in instrumental rationality because he ignores the costs and trade-offs in terms of goals that are, or ought to be, given as much consideration as the goal of successful execution of the operative algorithm.  If he has failed to recognize these trade-offs, then his epistemic rationality, his map of the world, is limited and inadequate.

UNSTATED GOALS AND ULTERIOR MOTIVES

Upton Sinclair recognized that "It is difficult to get a man to understand something, when his salary depends upon his not understanding it."  Sometimes goals are unknown, unstated, or outright denied.  Inferring or eliciting unstated goals can often make sense of actions that appear irrational.  For example, the surgeon described above may have an unstated goal to "take on challenging cases" to bolster his ego or his paycheck.  The trainee bronchoscopist may have as a goal "do as many bronchs as possible" during fellowship.  The cardiologist may order a low-value annual echocardiogram because he profits from it.  The researcher of anti-phospholipid antibodies may look for them in every patient because of his interest in the disease (and the availability heuristic).  The hospitalist may order a CT scan in every patient with pneumonia because he remains terrified of missing PE after his patient died suddenly and unexpectedly on the floor five years ago.  The anesthesiologist may opine that the patient should remain intubated overnight in the ICU because he does not wish to recover the patient on Friday night.  The intern may persist in his belief that the patient has Familial Mediterranean Fever not because the evidence in the case suggests it but because he was the first one to think of it and if he can prove it he will look like a genius.  And the pulmonologist may deny the possibility of Wegener's granulomatosis because the nephrologist thought of it first and he doesn't want to be shown up by her.

Related to this, very intelligent people often have general dispositions such as thoroughness that lead them to approach problems in a certain way.  These dispositions become unstated goals which suffuse all of the intelligent person's goal stratagems and sometimes compromise instrumental rationality.  Among academics, traits such as a need for purity, completeness, thoroughness, maximizing, and conformity may make them paradoxically less effective in a given task even if those traits make them more successful in other tasks or generally.  Take the example of the MD, PhD physician who has been repeatedly lauded for thoroughness in his dissertations and who is enlisted to write a protocol for hypothermia after cardiac arrest.  If he writes one like this, his dispositional penchant for thoroughness has led him to make a protocol so thorough as to be virtually unusable.  Or consider the prominent cognitive scientist Steven Pinker, who writes with such superfluousness, erudition, and pedantry that it is difficult to trudge through one of his chapters let alone one of his books.  (Here I use the redundant nouns superfluousness, erudition, and pedantry as examples of Pinker's writing, and hopefully to a much lesser extent my own.)  I don't doubt that Pinker is a genius, but his success in writing is limited by an overarching goal or tendency for thoroughness that becomes a handicap because of its excessiveness.  This tendency is revealed in guideline writers as well, who wish to include reference to and comment on every single study on a topic even if a majority of them are insufficient to guide clinical decisions.  This leads to guidelines so lengthy and cumbersome that they are largely ignored, to the chagrin and consternation of the authors.  I too have been accused, in this and other blogs, of overcomplicating certain topics even though that is the opposite of my goal.

In all of these cases, what most would agree ought to be the primary goal driving decisions is subverted by a base or low-priority goal that goes unstated, unrecognized, or denied but nonetheless gains primacy in the goal execution plan.

CHECKMATE:  FAILURE TO CONSIDER FUTURE GOALS OR STATES OF REALITY

Medicine is a game that is best played like chess rather than checkers.  A decision maker must think not only about the immediate goal at hand, but also about future goals.  Failure to do so can lead to missed opportunities or put the decision maker in a checkmate position.  Recognition of the need to consider future goals guides the practice of drawing blood cultures prior to the administration of antibiotics - this has become part of our algorithm for treating infection so that we don't lose information that we will want in the future.  But many times, taking the obvious immediate steps without considering future goals can put the unwary decision maker (and the patient) in quite a pickle, and lead rational observers to exclaim in exasperation, "but what are you going to do now?"  A few examples are in order.

  • The nocturnist places a right internal jugular infusion catheter in a patient with renal failure, so the day intensivist must use a suboptimal location for the dialysis catheter
  • The hospitalist gives clopidogrel to an immunosuppressed patient who needs a biopsy during the hospital stay
  • A woman with iron deficiency anemia is prescribed a PPI and scheduled for a colonoscopy 4 weeks later, which is negative
  • The surgical resident refuses to amputate gangrenous toes suspected of causing secondary sepsis because it is "dry gangrene," ignoring the fact that the toes are going to be removed at some point regardless
  • TPN is started in a patient with esophageal varices because the endoscopist proscribed feeding tube placement; "what's the endpoint?" asked the intensivist.  At what point will we be able to feed the gut?
Attending to future goals by envisioning all the scenarios that may play out if the current plan is executed (a process called cognitive decoupling) will help prevent getting stuck in checkmate, or "what do I do now?" situations.
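A minimal way to formalize this "chess, not checkers" point (my own sketch, with made-up scenarios and weights, not part of the original post) is to score a plan not only on its immediate payoff but also on the future options it forecloses:

```python
# Sketch: score each plan by immediate value minus the value of future options it forecloses.
# Plans, options, and numbers are hypothetical and for illustration only.

future_options = {"right IJ dialysis catheter tomorrow": 2.0}  # value of keeping this option open

plans = {
    "place right IJ infusion catheter now": {
        "immediate_value": 1.0,
        "forecloses": {"right IJ dialysis catheter tomorrow"},
    },
    "use another site for the infusion catheter": {
        "immediate_value": 0.8,
        "forecloses": set(),
    },
}

def plan_score(name):
    p = plans[name]
    lost = sum(future_options.get(option, 0.0) for option in p["forecloses"])
    return p["immediate_value"] - lost

for name in sorted(plans, key=plan_score, reverse=True):
    print(f"{plan_score(name):+.1f}  {name}")
```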

FAILURE TO ATTEND TO FEEDBACK ON THE SUCCESS OF LEARNED ALGORITHMS

Intelligent people are very good at learning algorithms, rules, and procedures.  They are less good at recognizing, by attending to the feedback available to them after employing those algorithms, that some of the algorithms they have learned are incorrect or poorly suited to a given use.  Consider the widespread use of transfusion in hospitalized patients before trials suggesting that lower thresholds yield similar outcomes, or in spite of the data from those trials.  Why did physicians, individually and collectively, not wonder why, after a transfusion triggered only by a slowly declining hemoglobin value, there were no obvious changes in any patient parameter other than the hemoglobin value?  (Without formal evidence of efficacy, transfusion would have to be a Category 1 (parachute) therapy with obvious benefits in order to justify its use.  It is a Category 2 therapy, and RCTs are necessary to demonstrate its efficacy.)  Many things that we were taught to do fall into this category:  routine electrolyte replacement, diltiazem drips in atrial fibrillation, daily laboratory testing, chronic use of prescription opioid pain medicine, excessive use of CT to look for PE in patients with alternative diagnoses, nebulizers in all ventilated patients, Haldol administration for agitated patients, nitroprusside to reduce afterload in acute CHF, bronchoscopy to remove "mucous plugs", bedrest, maintenance intravenous fluids, NPO orders, fine-tuning PaCO2 in ventilated patients, escalating doses of benzodiazepines on CIWA protocols, titrating to Swan-Ganz numbers, and wedging the Swan-Ganz catheter.  In these and many other cases, there is abundant feedback in daily practice suggesting that these interventions are either futile wastes of time or on balance harmful, but that feedback is poorly attended to.  The algorithms are followed mindlessly, experiential evidence is ignored, and epistemic and instrumental rationality are compromised.  In the Epistemic Problems in Medicine presentation, this is shown to affect the entire enterprise of evidence-based medicine, which is founded on the RCT as currently designed and conducted - we are using a flawed algorithm for RCTs that continues to fail us, but we ignore the feedback.  Similarly, in an effort to formalize concepts in physiology - an effort that often arises from what I call "physics envy" - intelligent people commit the Ludic Fallacy, e.g., by trying to force blood flow in elastic conduits (in vivo) to conform to the flow of electrons in an electrical circuit (Ohm's Law - thanks to John Myburgh for this example; see #22 among the podcasts on that page, called "forgotten physiology").
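To make the Ohm's Law point concrete, here is a small sketch (my own, with purely illustrative numbers) contrasting the rigid-circuit analogy, in which pressure drop is simply flow times a fixed resistance, with an elastic conduit whose resistance changes as distending pressure widens it - which is why the tidy linear relation breaks down in vivo.

```python
# Sketch of the Ohm's Law analogy for blood flow and where it breaks down.
# Numbers are illustrative only; this is not a physiological model.

def rigid_pressure_drop(flow, resistance=10.0):
    """Circuit analogy: delta-P = Q * R with R fixed (rigid tube)."""
    return flow * resistance

def elastic_pressure_drop(flow, base_resistance=10.0, compliance=0.1):
    """Crude toy model: higher flow-driven distending pressure widens the
    vessel; Poiseuille resistance falls as radius^4, so delta-P is no longer
    proportional to flow."""
    radius_factor = 1.0 + compliance * flow
    return flow * base_resistance / radius_factor ** 4

for q in (1.0, 2.0, 4.0):
    print(f"flow {q:>3}:  rigid dP = {rigid_pressure_drop(q):5.1f}"
          f"   elastic dP = {elastic_pressure_drop(q):5.1f}")
# Doubling flow doubles delta-P only in the rigid model; the elastic conduit is
# the messy reality that the tidy circuit analogy (the Ludic Fallacy) papers over.
```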

In these ways and many others, very intelligent people can make very bad decisions that defy common sense, i.e., very intelligent people can be lacking in rationality.  Until Keith Stanovich comes up with a Rationality Quotient (RQ) test to measure rationality and develops proven methods to teach people how to be rational, we would all do well to attend very carefully to our goals and to the feedback we receive after we execute plans to achieve them.


5 comments:

  1. Tremendous work Scott. Encapsulates so much of what and how we struggle in modern day medicine.

    Thanks

    S

  2. I was able to earn the highest grade in a rather idiosyncratic college physics course.  Every Friday there was a several-problem test on which there was no partial credit.  The types of problems were repeated every year, and all of the students had access to old tests.
    So success was a function of exactly what you said - recognizing the type of problem it was, memorizing the solution steps, and practicing the mechanics of the solution so you would have time to check the math.  Doing that well enabled me to earn the highest grade (number 2 was not close) and get through the course learning almost no physics.  I guess Goodhart's Law applies here as well, as certainly my grades were no measure of physics learned.

  3. Every word of this article is so enlightening, and I love the insight about intellectualization and common sense and how it leads to bad decisions. Great post.

