Accompanied by his anxious wife, a middle-aged male patient arrives at a rural Michigan hospital. He suffers from severe chest pain. The physician in charge, a compassionate-looking woman, suspects acute ischemic heart disease, but is not entirely sure. Should she assign the patient to a regular nursing bed for monitoring? If it really is acute ischemic heart disease, the patient needs to be rushed immediately to the coronary care unit. On the other hand, sending the patient to the care unit unnecessarily is not only expensive, but can also decrease the quality of care for those patients who truly need it, while exposing patients who do not to the risk of catching a potentially harmful, hospital-transmitted infection.
How humans can solve this and related complex decision-making dilemmas in the medical world is the central topic of this review article. For the emergency-room situation outlined above, there are different approaches to tackling the problem, rooted in different traditions in the decision sciences.
The first one is to leave all responsibility to the doctors. Yet, in an actual rural Michigan hospital under study, doctors sent 90% of patients with severe chest pain to the coronary care unit; as a consequence, it became overcrowded, quality of care decreased, and costs went up. The second approach is to try to solve the complex problem with a complex algorithm. This is what a team of medical researchers from the University of Michigan did.
They introduced the Heart Disease Predictive Instrument, which consists of a chart with some 50 probabilities and a logistic regression that enables the physician, with the help of a pocket calculator, to compute the probability that the patient should be admitted to the coronary care unit. However, few physicians understand logistic regressions, and charts and calculators tend to be dropped the moment the researchers leave the hospital.
The third approach consists of teaching physicians effective heuristics. A heuristic is a simple decision strategy that ignores part of the available information and focuses on the few relevant predictors. Green and Mehr developed one such heuristic for treatment allocation. This so-called fast-and-frugal tree ignores all probabilities and asks only a few yes-or-no questions (Figure 1). Specifically, if a certain anomaly appears in the patient's electrocardiogram (ie, an ST-segment change), the patient is immediately sent to the coronary care unit. No other information is considered. If there is no anomaly, a second variable is taken into account, namely, whether the patient's primary complaint is chest pain. If not, the patient is classified as low risk and assigned to a regular nursing bed. Again, no additional information is considered. If the answer is yes, a third and final question is asked to classify the patient. Can following such a simple heuristic enable doctors to make good allocation decisions? Figure 2 shows the performance of all three approaches in their ability to predict heart attacks in the Michigan hospital. As can be seen, the heuristic approach resulted in a higher sensitivity (proportion of patients correctly assigned to the coronary care unit) and a lower false-positive rate (proportion of patients incorrectly assigned to the coronary care unit) than both the Heart Disease Predictive Instrument and the physicians. The heuristic approach achieved this surprising level of performance by considering only a fraction of the information that the Heart Disease Predictive Instrument used.
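The sequential logic of this tree can be sketched in a few lines of code. This is an illustrative sketch only: the content of the third yes-or-no question is not spelled out above, so the `other_risk_factor` predicate below is a hypothetical placeholder.

```python
def allocate_patient(st_change: bool, chest_pain_primary: bool,
                     other_risk_factor: bool) -> str:
    """Fast-and-frugal tree sketch for coronary care unit allocation.

    Mirrors the sequence described in the text; `other_risk_factor`
    stands in for the unspecified third question.
    """
    if st_change:                  # 1st question: ST-segment change?
        return "coronary care unit"
    if not chest_pain_primary:     # 2nd question: chest pain the chief complaint?
        return "regular nursing bed"
    if other_risk_factor:          # 3rd question (hypothetical placeholder)
        return "coronary care unit"
    return "regular nursing bed"
```

Note that each question can end the decision process: with three questions, the tree has four exits, and no further information is looked up once an exit is reached.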
Views on rationality: from unbounded rationality and irrationality to ecological rationality
What to diagnose, whom to treat, what to eat, or which stocks to invest in—our days are filled with decisions, yet how do we make them, and how should we make them? In the decision sciences and beyond, the answer to these two questions depends on one's view of human rationality. There are at least three views.
Unbounded rationality: optimization
The study of unbounded rationality asks the question, if people were omniscient, that is, if they could compute the future from what they know, how would they behave and how should they behave? Optimization models such as Bayesian inference and the maximization of subjective expected utility take this view. When judging, for instance, whom to treat, these models assume that decision makers will collect and evaluate all information, weight each piece of it according to some criterion, and then combine the pieces to maximize the chances of attaining their goals (eg, treating the needy while saving costs). Optimization under constraints, a sub-branch of unboundedly rational optimization, refers to models that do not assume full knowledge but take into account constraints, such as information costs. Optimization models are common in fields such as economics or computer science. The spirit of optimization is also reflected in the workings of the Heart Disease Predictive Instrument, a logistic regression model that computes optimal beta weights.
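To make the logic of such optimization models concrete, here is a minimal expected-utility sketch for the allocation decision from the introduction. All probabilities and utility values are hypothetical illustration numbers, not estimates from the studies discussed here.

```python
def expected_utility(p_disease, utilities):
    """Expected utility of each action, given the probability that the
    patient has the disease. utilities maps (action, state) to a payoff."""
    actions = {action for action, _state in utilities}
    return {
        action: p_disease * utilities[(action, "disease")]
        + (1 - p_disease) * utilities[(action, "healthy")]
        for action in actions
    }

# Hypothetical payoffs: a missed heart attack is far worse than an
# unnecessary coronary care unit (CCU) admission.
u = {
    ("ccu", "disease"): 100, ("ccu", "healthy"): -20,
    ("bed", "disease"): -500, ("bed", "healthy"): 10,
}
eu = expected_utility(0.10, u)      # assume a 10% disease probability
best_action = max(eu, key=eu.get)   # here: admit to the CCU
```

Under these made-up payoffs, even a modest disease probability favors admission, because the loss from a missed heart attack dominates the calculation.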
Irrationality: cognitive illusions and biases
According to the second view, human reasoning is not characterized by optimization but by systematic deviations from optimization, also called cognitive illusions, errors, or simply irrationality. The heuristics-and-biases framework proposes that humans commit systematic errors when judging probabilities and making decisions. Although this framework differs therein from the optimization view, it still takes optimization, such as maximization of expected utility, as the normative yardstick against which to evaluate human decision making. Decisions that deviate from this standard are explained by assuming that people suffer from cognitive limitations, such as a suboptimal information-processing capacity or insufficient knowledge. Following this view, one might argue that the physicians' large false-positive rate and below-chance performance in making allocation decisions (Figure 2) reflect the workings of their limited cognitive abilities.
Ecological rationality: fast and frugal heuristics
There is, however, an alternative to optimization and irrationality. A couple of thousand journal articles (and several years) before the heuristics-and-biases tradition became popular, Herbert Simon, the father of what is known as the bounded rationality view, stressed that optimization is rarely possible in the real world, and thus a theory of rationality needs to study how people make decisions when optimization is out of reach. Instead of relying on unrealistic optimization models and striving to compute optimal solutions for a given task, so he argued, people use simple strategies, seeking solutions that are good enough with respect to an organism's goals. He also stressed that behavior and performance result from both cognition and an organism's environment (Box 1): "Human rational behavior ... is shaped by a scissors whose two blades are the structure of task environments and the computational capabilities of the actor" (p 7).
In the literature, a connection between the heuristics-and-biases view and Simon's concept of bounded rationality is often invoked. However, although Kahneman et al credited Simon in the preface to their anthology ("Judgment under uncertainty: heuristics and biases"), their major early papers, which appear in the same volume, do not cite Simon's work on bounded rationality. Thus, the connection between heuristics-and-biases and bounded rationality was possibly made in hindsight.
Embracing this emphasis on simple decision strategies and their fit to the environment, the fast-and-frugal heuristics framework has developed an ecological view of rationality, through which it tries to understand how and when people's reliance on simple decision heuristics can result in smart behavior. In this view, heuristics can be ecologically rational with respect to the environment and the goals of the actor. Here, being rational means that a heuristic is successful with regard to some outside criterion, such as making a decision accurately and quickly when a patient is rushed into the emergency room. Hammond called such outside criteria correspondence criteria, as opposed to coherence criteria, which take unboundedly rational optimization models as the normative yardstick for rationality.
For instance, while physicians' decisions in Figure 2 appear to be systematically biased towards mistakenly assigning healthy patients to the coronary care unit, these decisions might in fact be viewed as ecologically rational, as the following court trial illustrates. In 2003, Daniel Merenstein, a family physician in Virginia, USA, was sued because he had informed a patient about the pros and cons of PSA (prostate-specific antigen) tests, instead of just ordering one. Given that there is no evidence that the test does more good than harm, he had followed the recommendations of leading medical organizations and informed his patient, upon which the man declined to take the test. The patient later developed an incurable form of prostate cancer, and Merenstein was sued. The jury exonerated him, but found his residency liable for $1 million. After that, Merenstein felt he had no choice other than to overdiagnose and overtreat patients, even at the risk of causing unnecessary harm. This is exactly what a vast majority of US physicians seem to do: 93% of over 800 surgeons, obstetricians, and other specialists at high risk of litigation reported recommending a diagnostic test or treatment that is not the best option for the patient, but one that protects the physician against the patient as a potential plaintiff, including, for instance, unnecessary CT scans, biopsies, and MRIs, and more antibiotics than medically indicated. Similarly, in the rural Michigan hospital discussed above, of the roughly 90% of patients who were referred to the coronary care unit, only about 25% actually had a myocardial infarction.
In environments where the risk of being sued is high if a patient is mistakenly diagnosed or treated as healthy, and where physicians seek to avoid potential lawsuits, it is ecologically rational for them to follow the defensive heuristic "err on the safe side," being overcautious and prescribing more diagnostic tests and treatments than necessary. This defensive heuristic is not an irrational reasoning error or a cognitive illusion caused by people's mental limitations. Precisely because such defensive decision making is shaped by the environment rather than by immutable cognitive limits, there is room for change, as we will discuss next: by changing the environment, physicians can be led to rely on heuristics that are more beneficial to the patient.
The science of fast-and-frugal heuristics
Doctors and other humans cannot foresee the future, and cannot know if a diagnosis is correct for certain, or if a treatment will cure a patient for certain. Rather, they have to make decisions under uncertainty and often under the constraints of limited time. According to the fast-and-frugal heuristics research program, these decisions can nevertheless be made successfully, because people can rely on a large repertoire of heuristics—an adaptive toolbox—with each heuristic (ie, each tool) being adapted to a specific decision-making environment. By relying on a heuristic that is well adapted to a particular environment, a person can make sound decisions, often based on very little information in little time (hence “fast-and-frugal”).
There are different sets of mechanisms that help people to choose among the heuristics. The first depends on the workings of basic cognitive capacities, such as memory. The interplay of these capacities with the environment creates for each heuristic a cognitive niche in which it can be applied. For instance, the frequency and recency with which we have encountered information in our environment influence what information we remember, and how quickly we remember it. What information comes to the mental stage, and how quickly it arrives there, in turn determines what heuristics are applicable to solve a given task. A second set of mechanisms for selecting heuristics includes social and individual learning processes that can make people more prone to choose one applicable heuristic over another. Importantly, by changing the environment, people can be led to rely on different heuristics. For instance, in environments with a lower risk of being sued, doctors may rely on different medical heuristics. In Switzerland, where litigation is less common, only 41% of general practitioners and 43% of internists reported that they sometimes or often recommend a PSA test for legal reasons.
Past research on fast-and-frugal heuristics
The heuristics in the adaptive toolbox can be classified along several nonexclusive categories. These categories include: (i) how the heuristic processes information (eg, assigning different importance to different predictor variables by ordering them sequentially, as in Figure 1); (ii) whether the heuristic is applicable to the social domain (eg, to doctor-patient interactions or bargaining at the bazaar); (iii) whether the heuristic is a model of inductive inference about unknown quantities and future events (eg, in medical diagnosis or weather forecasting); or (iv) whether the heuristic represents a model for decisions that are based exclusively on the contents of one's memories (eg, in quiz shows or under time pressure in a medical emergency).
Corresponding models of heuristics have been studied in diverse domains, including applied ones, such as promoting proenvironmental behavior or forecasting customers' activities in business, as well as in the basic sciences, ranging from animal behavior to the law, finance, or psychology. At the same time, a number of heuristics for very different tasks have been proposed: heuristics for mate search, inferences about politicians, and choices between risky alternatives, to name a few. In the applied world, heuristics have been used to predict, for example, the performance of stocks, the outcomes of sports competitions, or the results of political elections.
Heuristics in health care?
Although the science of fast-and-frugal heuristics has started to make an impact in the medical community, the heuristics-and-biases perspective still dominates today. For instance, Elstein refers to heuristics as "mental shortcuts commonly used in decision making that can lead to faulty reasoning or conclusions" (p 791), citing them as a source of many errors in clinical reasoning.
Some medical researchers, however, recognize the potential of fast-and-frugal heuristics to improve decisions. For example, as McDonald writes, "admitting the role of heuristics confers no shame" (p 56). Rather, the goal should be to formalize and understand heuristics so that their use can be effectively taught, which could lead to less practice variation and more efficient medical care. Similarly, Elwyn et al state that "The next frontier will involve fast-and-frugal heuristics; rules for patients and clinicians alike" (p 574). In what follows, we will discuss different ways in which the study of heuristics can inform medical decision making.
How practitioners and patients make decisions
In medical decision making and beyond, the science of fast-and-frugal heuristics focuses on at least three main questions. The first question is descriptive: what heuristics do doctors, patients, and other stakeholders use to make decisions? The second question is closely interrelated with the first one, and deals with ecological rationality: to what environmental structures is a given heuristic adapted—that is, in which environments does it perform well, and in which does it not? The third question focuses on practical applications: how can the study of people's repertoire of heuristics and their fit to environmental structures aid decision making?
Let us begin with the descriptive question of how practitioners and patients make decisions. Here, fast-and-frugal heuristics differ from traditional, information-greedy models of medical decision making, such as expected utility maximization, Bayesian inference, or logistic regression.
How physicians make diagnostic decisions is potentially modelled by fast-and-frugal trees, a branch of heuristics that assumes decision makers follow a series of sequential steps prior to reaching a decision. Such trees ask only a few yes-or-no questions and allow for a decision after each one. Like most other heuristics, fast-and-frugal trees are built around three rules: one that specifies in what direction information search extends in the search space (search rule); one that specifies when information search is stopped (stopping rule); and one that specifies how the final decision is made (decision rule). In their general form, fast-and-frugal trees can be summarized as follows:
Search rule: Look up predictors in the order of their importance.
Stopping rule: Stop search as soon as one predictor variable allows it.
Decision rule: Classify according to this predictor variable.
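These three rules can be captured in a short generic sketch. The screening example at the bottom is hypothetical: the predictor names and risk categories are invented for illustration.

```python
def fast_frugal_tree(case, levels, final_exit):
    """levels: (predicate, classification) pairs, ordered by importance;
    final_exit is the classification if no earlier exit fires."""
    # Search rule: look up predictors in the order of their importance.
    for predicate, classification in levels:
        # Stopping rule: stop as soon as a predictor allows an exit.
        if predicate(case):
            # Decision rule: classify according to this predictor.
            return classification
    return final_exit

# Hypothetical screening example with two predictors (three exits).
levels = [
    (lambda p: p["red_flag_symptom"], "high risk"),
    (lambda p: not p["core_symptom"], "low risk"),
]
decision = fast_frugal_tree({"red_flag_symptom": False, "core_symptom": True},
                            levels, "moderate risk")
# Neither early exit fires for this case, so the final exit classifies
# it as "moderate risk".
```

Each level either exits with a classification or passes the case on, so a tree with n predicates always terminates after at most n checks.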
Fast-and-frugal trees are characterized by the limited number of exits they have; only a few predictors can be looked up, but they will always lead to a decision. For instance, the heuristic shown in Figure 1 represents one such fast-and-frugal tree with four exits. Specifically, a fast-and-frugal tree has n + 1 exits, where n is the number of binary predictor variables. In comparison, more information-greedy approaches have many more exits; Bayes' rule, for example, can be represented as a tree with 2^n exits. Contrary to such information-greedy approaches, fast-and-frugal trees make themselves efficient by introducing order: which predictors are the most important ones?
A number of fast-and-frugal trees have been identified as potential descriptive models of behavior. Dhami and Harries, for example, compared a fast-and-frugal tree to a regression model on general practitioners' decisions to prescribe lipid-lowering drugs for hypothetical patients. Both models fitted the prescriptions equally well (but see Box 2). Similar results were obtained by Backlund et al for judgments regarding drug treatment of hyperlipidemia as well as for diagnosing heart failure, and by Smith and Gilhooly for describing decisions about antidepressant medication. Fast-and-frugal trees, rather than full decision trees, are also routinely used in HIV testing and cancer screening, and have been identified as descriptive models of behavior in areas beyond medicine, including the law.
What about the patients? Even highly educated patients often rely on a simple heuristic when it comes to their own health, even when doing so contradicts their academic viewpoint. For instance, although most economists subscribe to neoclassical theories of unboundedly rational models and advocate weighing all pros and cons of alternatives in their research, when surveyed about their own real-life decisions about whether to participate in PSA screening, 66% of more than 100 American economists said that they had not weighed any pros and cons of PSA screening, but simply trusted their doctor's advice. They presumably followed the heuristic "If you see a white coat, trust it." Another 7% indicated that their wives or relatives had influenced their decision. The simple social heuristic "trust your doctor" is ecologically rational in environments where physicians understand health statistics, do not rely on defensive decision heuristics for fear of litigation, and have no conflicts of interest, such as earning money, a free dinner, or another kind of gratification for prescribing certain medications or for using certain diagnostic techniques. Yet, in the American health care system, where none of these conditions holds, reliance on this heuristic can become maladaptive.
Saving lives by changing the environment
Changing health care environments can pay off not only in the United States, but also in other countries, and sometimes even save lives. Consider the following example. Numerous Germans and Americans die each year while waiting for an organ donor. Even though expensive advertising campaigns are conducted to promote organ donation, relatively few citizens sign a donor card: according to a study by Johnson and Goldstein published in 2003, about 12% in Germany and 28% in the US. In contrast, about 99.9% of the French are potential donors (Box 3). These dramatic differences among Western countries can be explained by the interplay between the legal environment and people's reliance on the default heuristic. According to this social heuristic, a person should not act if a trustworthy institution has made an implicit recommendation: "If there is a default, do nothing about it." By German law, no one is a donor without their or their family's explicit consent. In France, in contrast, the default is that everyone is an organ donor unless they explicitly opt out. Depending on the legal environment, the same simple heuristic produces very different behavior, with very different outcomes for the general public and those who urgently need an organ. In short, the descriptive study of practitioners' and patients' use of heuristics, as well as the fit between these heuristics and the environment, can help in understanding not only how health care decisions are made, but also how they can be improved. This leads us to the third question: the applied one.
As of writing this article, the numbers reported by Johnson and Goldstein in 2003 have changed. For instance, in 2010 Germany had about 17% potential donors.
Can less be more?
Heuristics have various general features that render them especially suitable tools for improving applied medical decision making. Let us point out some of these.
As numerous studies have shown, when used in the correct environment, simple decision heuristics can surpass the accuracy of more sophisticated, information-greedy classification and prediction tools, including regression models and neural nets. Brighton, for example, compared the performance of heavyweight computational machineries such as classification and regression trees (CART) or the decision tree induction algorithm C4.5 to that of a heuristic called take-the-best. This heuristic resembles the fast-and-frugal tree shown in Figure 1; it bases a decision on just one good reason. Take-the-best simplifies decision making by searching sequentially through binary predictor variables that can have positive values (1) or not (0) and by stopping after the first predictor that discriminates. In contrast to more complex (eg, regression) models that assign optimal (eg, beta) weights to the various predictor variables they integrate, take-the-best simply orders predictors unconditionally according to their validity v, with v = C/(C + W), where C is the number of correct inferences when a predictor discriminates, and W the number of wrong inferences.
Search rule: Search through predictors in order of their validity.
Stopping rule: Stop on finding the first predictor that discriminates between the alternatives (eg, possible predictor values are 1 and 0).
Decision rule: Infer that the alternative with the positive predictor value (1) has the higher criterion value.
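The three rules of take-the-best, together with the validity formula v = C/(C + W), can be sketched as follows. The predictor names and the correct/wrong counts in the usage example are made up for illustration.

```python
def validity(c, w):
    """v = C / (C + W): proportion of correct inferences among the
    cases in which the predictor discriminates."""
    return c / (c + w)

def take_the_best(a, b, predictors):
    """Infer which of two alternatives, a or b, has the higher criterion
    value. a and b map predictor names to binary values (1 or 0);
    predictors is a list of (name, validity) pairs."""
    # Search rule: go through predictors in order of their validity.
    for name, _v in sorted(predictors, key=lambda p: p[1], reverse=True):
        # Stopping rule: stop at the first predictor that discriminates.
        if a[name] != b[name]:
            # Decision rule: the alternative with the positive value wins.
            return "a" if a[name] == 1 else "b"
    return None  # no predictor discriminates; guess

# Hypothetical predictors with validities from made-up counts.
predictors = [("symptom_x", validity(80, 20)),   # v = 0.8
              ("symptom_y", validity(60, 40))]   # v = 0.6
choice = take_the_best({"symptom_x": 1, "symptom_y": 0},
                       {"symptom_x": 1, "symptom_y": 1}, predictors)
# symptom_x does not discriminate here; symptom_y does, so the second
# alternative ("b") is chosen, and no further information is looked up.
```

Note that, unlike a regression, nothing is weighted or added: the single first discriminating predictor decides.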
Brighton showed that, across many data sets from different real-world domains, it was the rule rather than the exception that take-the-best outperformed sophisticated computational machineries in predicting new (ie, yet unknown) data. In the past years, a number of studies have striven to make similar comparisons between heuristics and information-greedy tools in medical decision making. One of the most recent of these attempts, for example, focuses on fast-and-frugal trees for the diagnosis of mental disorders such as depression.
Because heuristics are simple, they are transparent and generally easy to teach and to use in applied settings. Consider, once more, the tree shown in Figure 1: in order to make an accurate decision quickly, the doctor has to ask at most three simple yes-or-no questions. The decision-making process is completely transparent and can be easily communicated to a patient if needed. In contrast, dealing with the various probabilities and symptoms covered by the Heart Disease Predictive Instrument is more cumbersome and complicated. As a result, the decision-making process seems less transparent and is likely more difficult to explain to a patient.
Teaching simple, transparent heuristics to doctors can also help them to better understand health statistics, that is, the information on which informed medical diagnoses and treatment decisions should be based. Unfortunately, there is evidence that many doctors do not know how to correctly interpret such statistics. For instance, Gigerenzer et al gave 160 gynecologists the statistics needed for calculating that a woman with a positive breast cancer screening mammogram actually has cancer: a sensitivity of 90%, a false-positive rate of 9%, and a prevalence of 1%. The physicians were asked what they would tell a woman who tested positive about her chances of having breast cancer. The best answer is about 1 out of 10 women; the results for the remaining 9 out of 10 are false alarms (false positives). As it turns out, 60% of the gynecologists believed that 8 or 9 out of 10 women who tested positive would have cancer, and 18% thought that the chances were 1 in 100. A similar lack of understanding among physicians has been reported in diabetes prevention studies, the evaluation of HIV tests, and other medical tests and treatments. Making health statistics transparent can help doctors to understand them. One very simple heuristic, for instance, is to change the mathematical format in which the relevant numbers are represented. To illustrate this, consider the case of mammography screening once more. It is easy to teach physicians to translate the given probabilities into what is called natural frequencies, and to draw a corresponding tree to visualize the numbers. As Figure 3 shows, all the physicians have to do is to think of 1000 women. Ten of these women are expected to have breast cancer (= 1% prevalence). Of these 10 women, 9 will test positive (= 90% sensitivity). Of the 990 women who do not have cancer, roughly 89 will still test positive (= 9% false-positive rate).
When the format was changed to such natural frequencies, most of the gynecologists (87%) understood that 9+89 = 98 will test positive. Of these 98, only 9 will actually have breast cancer, equaling roughly 1 out of 10 (= 10%).
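The natural-frequency translation amounts to simple arithmetic, sketched below for the mammography numbers above (1% prevalence, 90% sensitivity, 9% false-positive rate, 1000 women):

```python
def natural_frequencies(n, prevalence, sensitivity, false_positive_rate):
    """Translate screening probabilities into expected counts out of n
    people, plus the probability of disease given a positive test."""
    sick = n * prevalence
    true_positives = sick * sensitivity
    false_positives = (n - sick) * false_positive_rate
    p_sick_given_positive = true_positives / (true_positives + false_positives)
    return true_positives, false_positives, p_sick_given_positive

tp, fp, ppv = natural_frequencies(1000, 0.01, 0.90, 0.09)
# tp = 9.0 and fp = 89.1 (roughly 89): of the roughly 98 women who test
# positive, only about 9 actually have cancer, ie, about 1 in 10.
```

The counts do all the work: no Bayes' rule needs to be invoked explicitly, which is exactly why the format change makes the answer transparent.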
Quick applicability is another important feature of well-functioning heuristics, particularly in emergency situations. After the attacks of September 11, 2001, the Simple Triage and Rapid Treatment (START) heuristic, which can be categorized into the branch of fast-and-frugal trees, allowed paramedics to rapidly split the victims into main groups, including those who required immediate medical treatment and those whose treatment was not as urgent.
Accessibility and costs
Well-functioning heuristics can be made easily accessible and help treatment and diagnosis even in situations where access to technology is restricted. For instance, for macrolide prescription in young children with community-acquired pneumonia, a tree with only two predictor variables (age and duration of fever) was developed as a decision aid (Figure 4). This frugal decision aid turned out to be only slightly less accurate than a scoring system based on logistic regression (72% versus 75% sensitivity), but using it does not require expensive technology. As a result, this decision aid can be made easily accessible to millions of children worldwide, even in poor countries.
Simple heuristics can also aid in saving costs in rich, developed countries, as the following example illustrates. In the US, there are about 2.6 million emergency room visits each year for dizziness or vertigo. Emergency room personnel need to detect the rare instances where such dizziness is due to a dangerous brain stem or cerebellar stroke. MRI with diffusion-weighted imaging can help doctors to make this challenging diagnosis. Another diagnostic tool, a simple bedside exam, was developed by Kattah et al. An alarm is raised if at least one of three simple tests indicates a stroke.
This bedside exam represents a tallying heuristic. In contrast to fast-and-frugal trees and take-the-best, which assign more or less importance to specific predictor variables by ordering them, tallying treats all predictors equally, for example, by simply counting them. In its general form, tallying can be described as follows.
Search rule: Search through predictors in any order.
Stopping rule: Stop search after m out of a total of M predictors (with 1 < m < M). If the number of positive predictors is the same for both alternatives, search for another predictor. If no more predictors are found, guess.
Decision rule: Decide for the alternative that is favored by more predictors.
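A minimal sketch of the decision rule, assuming all M predictors have already been looked up (ie, m = M) and each is coded 0 or 1:

```python
def tally(a, b):
    """Tallying: count positive predictors for each alternative, treating
    all predictors equally, and decide for the one with the higher count.
    a, b are lists of 0/1 predictor values in the same (arbitrary) order.
    Returns None when the counts tie, signalling a guess."""
    score_a, score_b = sum(a), sum(b)
    if score_a > score_b:
        return "a"
    if score_b > score_a:
        return "b"
    return None  # tied after all predictors: guess
```

In the one-alternative classification variant used by the bedside exam, the same idea reduces to counting positive tests against a threshold; with a threshold of one, a single positive test raises the alarm.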
As it turns out, Kattah et al's simple bedside exam yields a higher sensitivity than MRI, while its false-positive rate is only slightly higher than that of the MRI, which did not raise any false alarms. In contrast to the MRI, which can take 5 to 10 minutes plus several hours of waiting time, entails costs of more than $1000, and is not available everywhere, the bedside exam takes little time, is less cost-intensive, and can be conducted anywhere.
In short, relying on heuristics as a tool for medical decision making can help practitioners to make accurate, transparent, and quick decisions, often while depending on little technology and few financial resources. Less information, complexity, time, and technology can be more efficient, even when it comes to medical decision making.
Why heuristics work
One reason for the surprising performance of heuristics is that they ignore information. As we have explained above, this makes them quicker to execute, easier to understand, and easier to communicate. Importantly, as can be shown by means of mathematical analysis and computer simulations, it is also this feature that drives part of the predictive power of heuristics. Let us illustrate this with a simplifying, fictional story.
Imagine two doctors. One doctor, let's call him Professor Complexicus (PhD), is known for his scrutiny: he takes all information about a patient into account, including the most minute details. His philosophy is that all information is potentially relevant, and that considering as much information as possible benefits decisions. The other physician, Doctor Heuristicus, in contrast, relies only on a few pieces of information, perhaps those that she deems to be the most relevant ones. We can think of the two doctors' decision strategies as integration models. One of Professor Complexicus' models might read like this: y = w1x1^a1 + w2x2^a2 + w3x3^a3 + w4x4^a4 + w5x5^a5 + ... + wixi^ai + z. A simpler model of Doctor Heuristicus could throw away some of the free parameters, wi and ai, as well as some of the predictor variables, xi, such that y = w1x1 + z. The criterion both doctors wish to infer, y, could be the number of days different patients will need to recover from a medical condition. The predictor variables, xi, could be the type of condition the patients suffer from, the patients' overall physical constitution or age, or the number of times loving family members have visited the patients in the hospital thus far.
In a formal, statistical analysis, a comparative evaluation of these two models would entail computing R² or some other goodness-of-fit index between the models' estimations and the observed number of days it took the patients to recover. Such measures are based on the distance between a model's estimate and the criterion y. And indeed, fitting Professor Complexicus' strategy of paying attention to more variables and weighting them in an optimal way (ie, minimizing least squares) to observations about past patients (ie, those for whom one already knows how many days they needed to recover) will always lead to a larger R² than fitting Doctor Heuristicus' simpler strategy to these observations. Put differently, when it comes to explaining past observations in hindsight, Professor Complexicus will do the more convincing job. Given how well Professor Complexicus does in explaining the time patients needed to recover in the past, it seems intuitive that his estimations should also fare better than those of Doctor Heuristicus when it comes to predicting future patients' time to recover.
Yet this is not necessarily the case. Goodness-of-fit measures alone cannot disentangle the variation in the observations due to the relevant variables from the variation due to random error, or noise. In fitting past observations, models can end up taking such noise into account, thus mistakenly attributing meaning to mere chance. As a result, a model can end up overfitting these observations.
Figure 5 illustrates a corresponding situation in which one model, Model A (thin line), overfits already existing, past observations (filled circles; eg, old patients) by chasing after noise in those observations. As can be seen, this model fits the past observations perfectly but does a relatively poor job of predicting new observations (filled triangles; eg, new patients). Model B (thick line), while not fitting the past observations as well as Model A, captures the main trends in the data and ignores the noise. This makes it better equipped to predict new observations, as can be seen from the deviations between the model's predictions and the new observations, which are indeed smaller than the corresponding deviations for Model A.
Importantly, the degree to which a model is susceptible to overfitting is related to the model's complexity. One factor that contributes to a model's complexity is its number of free parameters. As is illustrated in Figure 5, the complex, information-greedy Model A overfits past observations; Model B, in turn, which has fewer free parameters and takes less information into account, captures only the main trends in the past observations, but better predicts the new observations. The same is likely to hold true with respect to Professor Complexicus' and Doctor Heuristicus' strategies: Professor Complexicus' complex strategy is likely to be more prone to overfitting past observations than Doctor Heuristicus' simple one. As a result, Doctor Heuristicus' strategy is likely to be better able to predict new observations than Professor Complexicus' strategy.
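The contrast between Model A and Model B can be reproduced in a few lines. In this hypothetical sketch (all data invented; the true trend is linear), Model A is an interpolating polynomial that passes through every past observation exactly, while Model B is a least-squares straight line with only two free parameters.

```python
def model_a(xs, ys, x):
    # Lagrange interpolation: fits every training point exactly (zero
    # error on past observations), so it also reproduces their noise.
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

def model_b(xs, ys):
    # Least-squares line: ignores the wiggles, keeps the main trend.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

trend = lambda x: 2 * x + 1                 # the true underlying trend
past_x = [0, 1, 2, 3, 4, 5, 6]              # past observations, with an
past_y = [trend(x) + (0.5 if i % 2 == 0 else -0.5)   # alternating "noise"
          for i, x in enumerate(past_x)]
new_x = [0.5, 1.5, 2.5, 3.5, 4.5, 5.5]      # new observations on the trend
new_y = [trend(x) for x in new_x]

slope, icept = model_b(past_x, past_y)
err_a = sum((model_a(past_x, past_y, x) - y) ** 2
            for x, y in zip(new_x, new_y))
err_b = sum((slope * x + icept - y) ** 2 for x, y in zip(new_x, new_y))
assert err_a > err_b   # the perfectly fitting model predicts worse
```

Model A's training error is essentially zero, yet its prediction error on the new points is orders of magnitude larger than Model B's, because it has bent itself around the noise in the past observations.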
In short, when data are not completely free of noise, increased complexity (eg, integrating as much information as possible) makes a model more likely to end up overfitting past observations, while its ability to predict new ones decreases (although see Box 4). But what matters in many applied medical settings is less the ability to explain (ie, fit) past observations than to make accurate inferences about future, unknown observations, such as about new, yet unseen patients.
Obviously, ignoring too much information and too many parameters can also be detrimental. A well-functioning model needs to strike a balance between the two extremes. As is known from the model selection literature, decreasing a model's complexity can eventually lead to underfitting; thus, in an uncertain world, there is often an inverted-U-shaped function between model complexity and predictive power. Moreover, besides the number of free parameters a model has, other factors also contribute to model complexity, such as a model's functional form and the extension of the allowable parameter space.
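A rough sketch of that inverted-U relation: polynomials of increasing degree (ie, with increasing numbers of free parameters) are fitted by least squares to the same noisy past observations and then scored on new observations. All data here are invented for illustration; the true trend is again linear.

```python
def polyfit(xs, ys, deg):
    # Least-squares polynomial fit via normal equations and Gaussian
    # elimination with partial pivoting (pure-Python, no libraries).
    n = deg + 1
    X = [[x ** j for j in range(n)] for x in xs]
    A = [[sum(r[i] * r[j] for r in X) for j in range(n)] for i in range(n)]
    b = [sum(r[i] * y for r, y in zip(X, ys)) for i in range(n)]
    for i in range(n):                                  # forward elimination
        p = max(range(i, n), key=lambda r: abs(A[r][i]))
        A[i], A[p], b[i], b[p] = A[p], A[i], b[p], b[i]
        for r in range(i + 1, n):
            f = A[r][i] / A[i][i]
            A[r] = [u - f * v for u, v in zip(A[r], A[i])]
            b[r] -= f * b[i]
    c = [0.0] * n
    for i in reversed(range(n)):                        # back substitution
        c[i] = (b[i] - sum(A[i][j] * c[j] for j in range(i + 1, n))) / A[i][i]
    return c

trend = lambda x: 2 * x + 1
past_x = [0, 1, 2, 3, 4, 5, 6]
past_y = [trend(x) + (0.5 if i % 2 == 0 else -0.5)     # alternating "noise"
          for i, x in enumerate(past_x)]
new_x = [0.5, 1.5, 2.5, 3.5, 4.5, 5.5]                 # new observations

def pred_err(deg):
    # Squared prediction error of a degree-`deg` fit on the new points.
    c = polyfit(past_x, past_y, deg)
    return sum((sum(cj * x ** j for j, cj in enumerate(c)) - trend(x)) ** 2
               for x in new_x)

errs = [pred_err(d) for d in range(7)]
# Degree 0 (a constant) underfits; degree 6 interpolates the noise and
# overfits; the intermediate, true complexity predicts best:
assert errs[1] < errs[0] and errs[1] < errs[6]
```

The degree-1 model matches the complexity of the process that generated the data, which is why it sits at the bottom of the U; with real data that complexity is unknown, which is what makes the balance hard to strike.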
Summary and outlook for future research
Rationality has many meanings. Most theories assume that the future can be known with certainty, including the probabilities, for instance, for weighting different pieces of information, so that unboundedly rational optimization methods can define rational choice. There are two variants of these: those that assume that people's behavior can actually be modeled by this form of unboundedly rational optimization, and those that assume that people's behavior systematically deviates from it, manifesting irrational cognitive illusions, biases, and errors. This article dealt with a third perspective, which asks how people make decisions when the conditions for optimization are not met. That is the case for most real-world decisions, including in medicine. In uncertain worlds, people tend to rely on heuristics that can make better and faster decisions than complex, information-greedy strategies.
What are promising areas of future research on heuristic decision making in medicine and in health care? For instance, while the neuronal basis of a number of heuristics has started to be explored, comparatively little research on fast-and-frugal heuristics has been carried out in the clinical branch of the neurosciences, and in psychiatry more generally. We have mentioned only one of the few existing applications of heuristics to these fields, namely a comparison of a heuristic with a more complicated tool in diagnosing depression. Others include attempts to investigate whether patients with mental disorders or impaired mental functioning rely on fast-and-frugal heuristics. Glockner and Moritz, for example, reported that under high stress induced in a laboratory task, schizophrenia patients seemed to rely on tallying heuristics. Pachur et al, in turn, investigated the impact of cognitive aging on people's reliance on heuristics. They found that older adults are more likely to rely on a particularly simple heuristic based on recognition memory in a potentially maladaptive way. Similar results have also been reported by Mata et al, who provide evidence that older adults' limited cognitive abilities can lead them to rely on certain heuristics independent of whether the environment favors their use or not. Future research could build on these findings, addressing questions such as how environments should be designed for people who suffer from a mental disorder or otherwise impaired cognitive functioning. We hope that this review article contributes to stimulating what we take to be a promising route for future research and applications of the science of ecologically rational, fast-and-frugal heuristics.