276 Transcript


Dr. Sharp: Hello, everyone. Welcome to The Testing Psychologist podcast, the podcast where we talk all about the business and practice of psychological and neuropsychological assessment. I’m your host, Dr. Jeremy Sharp, licensed psychologist, group practice owner, and private practice coach.

This podcast is brought to you by Blueprint. Measurement-based care is proven to improve patient outcomes, but it’s historically been time-consuming and costly to implement in private practices like ours. That’s why Blueprint has developed an end-to-end solution that administers, scores and charts hundreds of symptom rating scales to give you deeper insights into how your clients are progressing. Learn more and request your demo at bph.link/testingpsychologist.

Hey everyone. I am so happy to welcome back my friend and colleague, Dr. Stephanie Nelson. Dr. Stephanie is a pediatric neuropsychologist who specializes in complex differential diagnoses. She is board certified in both clinical neuropsychology (ABPP-CN) and pediatric neuropsychology (ABPdN). She has a thriving consultation practice and loves supporting other psychologists and neuropsychologists who specialize in pediatric assessment. Stephanie is the person I send all of my consulting clients to when they want to talk about clinical stuff because she is amazing.

Stephanie is here to talk about yet another area of her expertise, something that she is presenting on in two months at ABPdN. I think that’s at the end of April.

Our topic today is bias in clinical decision-making and how to handle it when we are wrong. We don’t like to think about being wrong in our conceptualization and clinical decision-making, but research would suggest that it happens a lot. There are quite a few things that we can do to combat being wrong and deal with it when we are wrong. So Stephanie is talking through all those things.

We talk about research on clinical decision-making both in the medical and mental health professions and how often we get it “right.” We talk about the different reasons we might be wrong in clinical decision-making. So some of the cognitive biases that come up in our work. We talk about how to handle it when we’re wrong or ways to combat being wrong and increase our accuracy in clinical decision-making.

We talk about many other things, but those are the highlights. And as usual, this is an episode chock full of useful information. I hope that you will take a lot from it. I will not keep you in suspense any longer. Let’s get to my conversation with Dr. Stephanie Nelson.

Hello, Stephanie. Welcome back.

Dr. Stephanie: Hello. Nice to be here.

Dr. Sharp: I'm glad to have you as always. So we're talking about an interesting topic today, interesting to at least the two of us, and that is bias in clinical decision-making. Is that a fair way to frame it?

Dr. Stephanie: Absolutely. I think so.

Dr. Sharp: Okay. There’s so much to say on this topic. I’m going to dive right into it. As most people know, the leading question is always, why is this important? For you, I’m particularly interested because I think as a lot of folks know, you have a pretty deep understanding and curiosity about a lot of topics. So I am curious why this in particular, and why you’re focusing on this now?

Dr. Stephanie: That's such a great question. I think we're in a weird profession. Aren't we? This profession self-selects for people who like to be right, who got a lot of As as they went through school, and who got the answers through diligent hard work and study. And then we go out into the real world after we finished grad school when we're 50 or however old we are when we finally get that done, and we are suddenly given real people and a DSM that maybe isn't that attached to how the brain actually works. And we're given tests that maybe don't measure what they say they're going to measure. And we suddenly start discovering that getting those A-pluses is a lot harder because the ambiguity that's there is just so much bigger.

And I think like a lot of people who’ve been doing this for a while, I start thinking a lot about how often I’m wrong. I start thinking about how do I know when I’m right? How do I have any control over that? I wish that had been my first thought. Of course, everyone’s first thought is, why are all my colleagues wrong?

We’re so much better at seeing when other people make errors than we are at seeing our own, but as I’ve been doing this for longer and longer, I think more about myself and my own processes. And it just got me started. There’s a lot of reading about this. There are a lot of great books about the topic and it just became fascinating to me.

Dr. Sharp: I love that. There’s so much to unpack already, but I can just say I totally resonate with this experience. Having done this for long enough now where I’m seeing kids at like 4 and 10 and 15 and I’m like, “Wait a minute, something is wrong here. Did I miss something way back when? How has all this changed so much” or something like that.

And like you said, of course, reading other people’s reports and it’s just so easy to pick out the things that they missed. I never did that. So interesting. There’s a developmental component here. There are obviously cognitive components to it. So, I think there’s a lot to dive into.

Maybe, let’s do a frame. When you’re talking about being right or wrong in our clinical work, what do you mean by that?

Dr. Stephanie: Part of why this discussion is so hard is there isn’t a real gold standard. There is no true thing that we can compare our results to. There’s no x-ray or blood test in a lot of the individuals that we work with. So even figuring out right or wrong is hard, but I think we can all agree that there are ways where it can be more wrong.

We can be really off about the diagnosis. We can miss a problem that is brewing that we maybe could have seen. We can frame the evaluation results that we get in a way that actually isn't helpful to families. So when I'm thinking about wrong, I'm thinking about basically being more or less wrong, more than I am about objective right or wrong.

Dr. Sharp: Yeah. This may be too strong of a statement. Is there an objective right or wrong when it comes to diagnosis?

Dr. Stephanie: Is there, right? We don’t really know. We could use the metric of, do we agree with our colleagues, but we know even from the DSM field trials that most of the time we don’t agree with our colleagues.

The Kappa statistics for how often we agreed with our colleagues for most diagnoses that we give to children and adolescents are in the mid-range. They’re not very good. The only ones that we even had a reasonable agreement on used to be ADHD and autism. And as the definitions of those are expanding or getting applied to more individuals, I think our agreement, even on those ones that we used to agree about quite a lot would vary a lot from clinician to clinician. So even if we look at, would we agree with our colleagues, there’s not much of an objective standard here.

Dr. Sharp: Do you know how that compares to other similar fields like the medical field for example?

Dr. Stephanie: Yes. So there has been some criticism of Cohen's Kappa statistic, which is just sort of a measure of how much people agree, corrected for how much agreement would happen by chance. Because even in the medical field, people are often wrong. Doctors reading even objective things like x-rays or mammograms are incorrect a lot of the time.

Some of the statistics that are covered in Jerome Groopman’s book, How Doctors Think, which is excellent, say that the average level of medical error is about 20%, maybe 30%. So even doctors are getting this wrong a lot. So when we look at how our Kappa statistics compare to medicine, we don’t look as bad as we maybe think, but I think we would look a lot worse than we would want to, or than we think we do.

Dr. Sharp: Right. Well, that leads to another question, which is, how well are we doing? Or, how right are we, if you can think about it that way?

Dr. Stephanie: How right are we? We don’t have direct data because there’s nothing to compare us to. So we have to look at indirect data to see how wrong we are. We have to look at other experts in other fields or how medicine is doing, or how Americans, in general, make decisions.

And in general, experts are wrong a lot. The gold standard for this is the psychologist, Philip Tetlock. He wrote a book called Superforecasting, but this is also discussed in a lot of other areas where he looked at political experts who go on television and make predictions and say what they think is going to happen.

And he actually held them to their word and tested how well they did and said how right was your prediction? And it turns out that there was no area of judgment where experts were more correct than just a simple algorithm, like saying probably the stock market will continue to increase at the same level that it’s increasing now.

As some people have put it, a teenager with a calculator and some basic rules would do better than most expert political assessors do in terms of making judgments. What he did find was that the main difference is confidence. The more confident you are and the more famous you are, the more likely you are to be wrong.

So, when we look at how right we are, we’re not right very often. And the same thing has been shown for car mechanics, for financial advisors, for, like I was saying, people reading x-rays, for all sorts of different fields. There’s actually a little summary of this called Gibson’s law that the lawyers sometimes use where they say for every Ph.D., there is an equal and opposite Ph.D. who will say the exact opposite thing.

Dr. Sharp: That's pretty good. I've never heard that before. It makes me think of how some of this overlaps with the material in that book Range. I know you've read Range, but it's that idea that the longer someone does something, the more of an "expert" they are or the more they specialize, the worse they actually do.

Dr. Stephanie: Isn’t that depressing.

Dr. Sharp: Yeah.

Dr. Stephanie: Right. It’s such an amazing book. Everybody should go out and read Range because it’s an amazing book.

One of the things that it touches on is this idea of kind environments versus wicked environments when you're trying to learn and trying to be more right. Kind environments help us become more right. A kind environment gives you immediate feedback about what you did right, what you did wrong, what you got partially right, and you can improve from there. To be hip and cool and show that I'm with it, the most current example I can think of of a really kind environment is that game Wordle. Are you playing Wordle?

Dr. Sharp: Oh, sure.

Dr. Stephanie: Absolutely. Right. When you put in a word, you immediately get feedback. It says this is right, this is wrong, this is partially right. If the word is smock, I always start with the word steam. So I put in steam and it says, Hey, the S is right and in the right position, the M is right, but it’s in the wrong position and the other ones are incorrect.

That feedback is so helpful that we can usually guess which five-letter word, out of the 160,000 words in the English language, it is within six guesses. That kind of feedback is amazing. We do not get that kind of feedback with our evaluations. If we guess, oh, it's attention problems, impulsivity, hyperactivity, learning problems, and anxiety, nobody says, you know what, attention problems is right and in the first position, and anxiety is right, but think about your priority, where you put it in that list, and the other three are wrong. We don't get that kind of feedback.

We are in a wicked learning environment. And that means our feedback is often delayed or absent or wrong or testimonials from parents who don’t have anything to compare how we did to. Everybody learns the wrong lessons from wicked feedback. We think we’re successful when we’re not. We think we’re mistaken when we’re not.

The psychologist Robin Hogarth, who came up with this idea, tells this great example of a famous diagnostician in New York City who was amazing at his specialty, diagnosing typhoid fever. And the way he did it is he would stick his hands into the mouths of his patients and feel their tongues. Nobody else was doing it this way, but he had come up with this amazing way that was 100% accurate. He was amazing. Even before patients showed symptoms of typhoid fever, he could detect it. It turns out, of course…

Dr. Sharp: I see where this is going.

Dr. Stephanie: Yes, he was a carrier of typhoid fever. He was giving it to his patients. He was a more effective spreader of typhoid fever than typhoid Mary was, but he was learning the exact wrong lessons from his wicked environment.

Dr. Sharp: Yeah. So that begs the question, is wicked feedback even feedback at all? I mean, where is the feedback coming from in that case?

Dr. Stephanie: That’s a great question. I think most of us think that whether we got it right or wrong is based on our feeling and the parents’ feelings, right? And our feelings are often wrong.

A great example of that is, a woman who had that tragic thing happen where she went into the hospital and the surgeon operated on the wrong side of her body. She woke up and the wrong side of her body was in bandages. She’d had the wrong leg amputated or something like that, something awful. And she said, what happened? And the director of the hospital came out and said, well, it turns out that the error was, the surgeon really felt he was on the right side of the body. So our feelings about being right are often really wrong.

Kathryn Schulz, in her book Being Wrong, which is also amazing, talks about how we don't really know what being wrong feels like. In fact, she says being wrong feels exactly like being right. We feel exactly the same when we're wrong as we do when we're right. We feel bad once we know we were wrong, but while we're actually wrong, when we don't know an answer and think we've got it right, we feel exactly the same as when we're right.

And when we use parents' feelings, it's not much better. Most people feel we… Well, we take the wrong lessons from it, I guess I should say. When parents say that they felt good about our evaluation, we say, oh, that must mean I got the diagnosis right. I'm amazing. I'm a genius. I'm a brilliant clinician. When parents don't like our evaluations, we say, oh, they're just resisting the diagnosis. They're in denial. They probably have the same condition themselves and they just don't see it. They didn't understand it. We blame them. It's not us getting it wrong. So there's almost no feedback we can get that helps us improve if those are what we're looking at.

Dr. Sharp: Right. I know we’re going to talk about some ways to help here, but I want to lay a little more background if we can.

Dr. Stephanie: Sure.

Dr. Sharp: Are there different ways of being “wrong”?

Dr. Stephanie: Well, yes, there are so many ways of being wrong. If you just search cognitive bias in Wikipedia, you’re going to get this list of 50 to 70 amazing different ways that we can be wrong. And here I’m not even talking about the types of prejudice or stereotypes or those types of things that we probably learned about in grad school hopefully. I’m not even talking about the ways that our clients can be wrong and present us a story that maybe is a little bit different than the actual objective truth.

Here, we're just talking about ways that we are going to be led astray when we think we're being objective about data. And there are a lot of them. To narrow them down, I've been thinking a lot about the way that Chip and Dan Heath, those brothers who write business books, wrote a book called Decisive and they talked about these four horsemen or these four villains of being wrong.

They talked about narrow framing, or anchoring on one particular option, as one way of being wrong. They talked about confirmation bias and ways that we often think we're looking for data but really are just trying to confirm a particular story. They talk about being led astray by our emotions about a case, or by our disinterest in thinking about the math of a case; we think about what's unique about this individual and not about base rates, not about what's not unique about the individual. They talk about our overconfidence and how we often think that we're right. And the more right we think we are, the more likely we are to be wrong.

When doctors, for example, think that they’re completely certain about a diagnosis, 100% definitely couldn’t be anything else, they are wrong 40% of the time. When they just let in 1% uncertainty, they’re only wrong 27% of the time. It’s still a lot, but even just allowing that little bit of uncertainty makes us more right.

Dr. Sharp: Got you. Would it make sense to go back and give examples of each of those four ways and how it comes up in the assessment?

Dr. Stephanie: Absolutely.

Dr. Sharp: Okay.

Dr. Stephanie: So that first one we were talking about is narrow framing. We tend to frame our evaluations and you see this all the time when people are talking about an evaluation, they say, oh, this child came in for an ADHD evaluation, or they came in for a dyslexia evaluation. That is a narrow frame. There’s only one option there. It’s a yes or no about that one option.

When we use that kind of narrow frame, we miss seeing so much else that could be happening or so much other data that is available. And we tend to anchor on that specific diagnosis. We are now looking for evidence of that diagnosis instead of looking at the data as a whole. And we know that this way of making decisions of only considering one option is really terrible.

We know this from business decisions. People have looked at all of the decisions made by organizations and found that most of the time only one alternative is considered. They’re deciding whether to expand or not, whether to buy a company or not. And when they only consider one option, the decision is wrong 50% of the time. When you add in another option, we’re much more likely to be right. They’ve also shown the same thing about how teenagers make decisions. They tend to only consider one variable.

But when we have more information and when we reframe it from, does this person have ADHD, to, why is this person having trouble paying attention? Why are they reporting these symptoms? What could be going on? When we expand the frame, we're much more likely to consider the information that we get more holistically and more accurately. That first path that we took, what's called a plausible path, doesn't get as sticky. We're no longer trying to just confirm ADHD or not, and sort of cherry-picking our data and only looking at the data that supports that conclusion. Instead, we're able to take a little bit more distance and look more objectively at what's coming in.

Dr. Sharp: Yeah. I certainly know that we've fallen prey to that in our practice a lot because that's how people present it, right? You have to start doing the work immediately, before you even get referrals, because when the referral comes, it's like, this is an ADHD evaluation. It's very rare for us that someone's question is ambiguous.

Dr. Stephanie: And if you think about Daniel Kahneman, the psychologist who won the Nobel prize in economics and who talks about a lot of these biases, he talks about that as anchoring. You anchor on that first piece of information and you fail to consider other things, but then other biases start to creep in once you've made that first one.

An example of another bias that creeps in is the negativity bias. Our brains pay a lot more attention to negative pieces of information, symptoms that the person is reporting. In fact, Baumeister and other psychologists who’ve studied this say we probably pay about four times as much attention to the negative stuff as to the positive stuff, somewhere between 2 and 5, and they settled on 4.

So if a patient came in and told you, I pay attention just fine at home and I pay attention just fine when I'm driving, but I'm really struggling with concentrating at work, our brains ignore the first two things that they said, or at least discount them. What we hear is, oh, this person has trouble concentrating. And what we remember is, this person has a lot of trouble concentrating. We would actually need to hear 4 or 5 times more disconfirming evidence to even consider something.

So we’d need to hear the patient say, oh, actually I can concentrate when I’m at school. I can concentrate at my part-time job. I can concentrate when I’m driving and I can concentrate just fine with friends, but I have trouble concentrating during movies. And then we might say like, oh, well maybe you just don’t like movies. Right? We’re able to consider other options at that point. But if we’ve anchored on one, the negativity bias means we are just looking at and heavily over-weighting the things that seem to confirm that diagnosis, the symptoms that seem to confirm it.

Dr. Sharp: Yeah, that makes sense. I am having a hard time staying present during our interview because I'm thinking about all of the mistakes that I've made over the years in clinical decision-making.

Dr. Stephanie: That's the terrifying and yet also liberating part of reading about how often we're wrong. For one, gosh, there's a lot of room for improvement, but it can be depressing.

If we go back to Kathryn Schulz, the author of Being Wrong, she talks about how the amazing thing about our brain is not that we’re wrong so often, it’s that we don’t really see the world as it is. We see it as it could be. We can imagine all these different alternate possibilities and play around with them. Making mistakes, getting things wrong is important. It’s fundamentally human. And we actually get more information from being wrong than we do from being right.

If you wanted to know how you could improve, and someone just gives you a grade of like, you know, Jeremy, you get an A- for assessment. You're doing all right. Somewhere around B+ or A-, pretty good. You don't actually learn any information from that. But if you get a bad grade, if somebody says, you know what, C- here, that's the worst grade I could imagine getting.

Dr. Sharp: Yeah, I just had a panic attack.

Dr. Stephanie: Exactly. If you got that kind of grade, you can start zeroing in on where you’re making mistakes, where you’re thinking could be tightened up, or ways that you could improve. You actually get more information from being wrong than you do from being right.

Dr. Sharp: That's such a good point. I will just maybe highlight that that assumes that folks want to get better and are open to feedback in order to get better, and that's what that process actually looks like. Again, just speaking personally, there are times when I would say, I mean, I would always say, oh, I really want to get better at this, but when it comes down to the rubber meeting the road and going through that process, that can be really challenging. And that's a whole other set of defenses or whatever cognitive and emotional processes that might keep you from engaging in that process.

Dr. Stephanie: Absolutely. There’s a lot of research about that self-justification hypothesis or how difficult it is to examine ways that we might have been wrong and to think about that and to actually really want to devote time to becoming more right. We think in theory that we want to do it, but actually doing it is very difficult, but we do think in theory that we want to do it. And most of our colleagues do too. Most people out there actually do want to be more right. There are a few people who are not as interested in it. They’re very confident that they’re making a lot of great decisions all of the time. And that’s where they’re at. But most people are trying to learn the most lessons they can from this wicked learning environment that we’re in.

Dr. Sharp: That’s encouraging. Well, let’s talk through the next.

Dr. Stephanie: Confirmation bias?

Dr. Sharp: Yes. How does that show up?

Dr. Stephanie: Our brains recognize patterns from incomplete data. You've seen that example where a sentence is written and all of the letters inside each of the words are in a weird order and there are all these spelling mistakes, or you're giving the WIAT and the child's writing is upside down with a whole bunch of mistakes in it, and you can still read it just fine. Your brain is an expert at making predictions in those areas and at filling in the gaps.

Confirmation bias is actually a wonderful thing in that case because you just assume you know what the word is and fill it in. But it leads us astray when we expect one thing and the data comes back another way because what we do is we ignore, or we discount the data or we interpret it in our favor.

I’ll give you two examples of studies about this that I love. One is a study where a group was shown something on the screen that looked either like a 13 or like a B. You can see how a 13 could look like a B. And they were told that if it was a letter and they identified it and push the button, they would get a drink of delicious, fresh, squeezed orange juice. And they were told that if it was a number, they would get a drink of a viscous, chunky, foul-smelling, organic veggie smoothie. And you can guess what they saw with that ambiguous data. The data was always ambiguous and people who wanted the orange juice saw it as something that would get them that orange juice and people who were worried about avoiding the veggie smoothie saw it in a different way.

Another example is if you are shown a series of dots of different colors and you have to say how many of them are blue. There are 10 dots on there, and you say something like, five of them are blue and five of them are a different color.

Now you've established a set; you start thinking about five or six of them being blue. Then the researchers started slowly decreasing the number of blue dots on the page, but people had established that set. And so they wanted the same number of blue dots, and they started interpreting dots that were purple or dots that were sort of green as blue. They literally saw them as blue in order to confirm the expectation that they had already set up.

So those are some examples of confirmation bias. Clinically, we can think that when you come in with the idea of like, oh, this is autism, you start seeing all of the data in a way that is going to confirm that, and you start discounting negative data that doesn't confirm your hypothesis. You see ambiguous data through your particular lens and even make up some data so that it fits the hypothesis that you already had in mind.

Dr. Sharp: That is also terrifying that we will make up data. Can you give an example of that?

Dr. Stephanie: Sure. I think that autism is a changing definition right now. So, it’s a particularly easy one to see where biases can creep in because the objective definition of it is changing so much that every clinician right now has their own definition of what it is.

But you can often hear people when they’re talking about autism, where they’ll say like, the symptoms that are in the DSM no longer need to apply. And they’ll say, oh, well, she has a lot of friends but they’re superficial friends. She is really great at making eye contact and nonverbal communication, but she’s learned that or she’s masking. She has a special interest that’s just a normal hobby like reading, which is the top hobby of 52% of people. So it’s a really normal habit, but she reads a lot.

There are all of these things where we take data that could go either way and we interpret it to fit what we think we're an expert in, or we'll even take data that contradicts what's in the DSM and change it so that somehow it fits our preconceived hypothesis. And the more we're an expert in something, unfortunately, the more we tend to do this.

This is known as Wittgenstein's ruler. We actually tend to try to preserve our status as an expert in that area by finding these subtle cases that no one else could see it in. And the way that Wittgenstein's ruler is sometimes described is: if a stranger that you don't know says everyone is a leftist, that's probably a better indication that the stranger is a rightist than it is something valuable about the people that the stranger has assessed.

So if someone sees everything as a certain diagnosis, if they see bipolar disorder everywhere or trauma everywhere, that probably tells you more about the person and what they think they’re an expert in, than it does about the people they’ve evaluated. We also sometimes call this the hammer in search of a nail fallacy.

Dr. Sharp: Right. That’s a great example. I’m really interested to talk about the next one. I wrote it down as emotional reasoning. I don’t know if that’s the right term, or the disinterest in math, because I really like math. And I would love to see where this goes.

Let’s take a break to hear from our featured partner.

Introducing Blueprint, the all-in-one measurement-based care platform that helps behavioral health providers grow top-line revenue, increase clinician satisfaction, and deliver more effective care. At Blueprint, they believe that nothing should get in the way of delivering the highest quality mental health care to your clients. Not a lack of finances, clinicians, time, or technology. That’s why they’ve developed an end-to-end solution that works with your clinical intuition to engage clients, reduce unnecessary burden and optimize outcomes.

The Blueprint platform administers, scores, and charts hundreds of symptom rating scales to give you deeper insights into how clients are progressing. And it provides objective data that enable more productive and effective payer negotiations. That means getting paid by insurance. Learn more or request your demo at bph.link/testingpsychologist.

All right, let’s get back to the podcast.

Dr. Stephanie: There’s a great joke about math that’s sort of all over the place. The last place that I most recently saw it was tweeted by Neil deGrasse Tyson, the astrophysicist. He said, there are three kinds of people in the world, those who are good at math and those who aren’t.

I think that there are two types of testing psychologists, those who love math, and those who are happy that their last statistics class was a decade ago. If we're not comfortable with math, or we don't like it or don't resonate with it, it leads us to overweight our emotions, our feelings, our gut instincts in a lot of these cases.

Psychologists distinguish between the outside view and the inside view. The outside view is the math view of the problem: the base rates, what typically happens to an individual in this situation. The inside view is the particular color or flavor that's involved with this particular individual.

I can see from your expression that I'm not explaining it well. So I'm going to use an example. So you are hungry for Mexican food. And so you search for a Mexican restaurant nearby and you find one that has a 3.5-star rating. That's the outside view. It says most people find this restaurant slightly better than average, or, since the average rating on reviews is more like 4 stars right now, maybe slightly worse than average. And that's the outside view. And that's actually a pretty good predictor of how much you will like the restaurant. That's why we look at reviews. It's a pretty good predictor. That's the math view. That's the outside view.

The inside view is when you start reading the reviews and maybe you'll discover that the food is mediocre. Maybe instead, you'll discover something unique about this restaurant. Maybe they serve amazing food, but 10% of people say it's overpriced. It's too expensive. And that's what's dragging down the reviews. Now you have more information and you can think, you know what? I love Mexican food and I've got some money that's burning a hole in my pocket. I want to go to this restaurant that's great but overpriced, and you can make a different decision. That's the inside view.

Most psychologists are naturally very empathic, naturally very caring, naturally very focused on the details. And so, we’re naturally pretty focused on the inside view. When our clients come in and tell us what’s going on with them, we focus on what makes them unique. And we often don’t focus on that outside view, the likelihood that it’s actually the case, or what typically happens to people in this situation. And when we look at the inside view first, we’re more likely to be wrong. We’re more likely to end up eating at a restaurant where the food is actually mediocre.

Dr. Sharp: You know, I can hear people listening who might be saying, well, that's not fair. That turns everybody into a number. People aren't statistics. How do we do a personalized assessment that tells the person's story without losing this inside view? How do you reconcile those two things?

Dr. Stephanie: Absolutely. Almost everyone who talks about this outside view meets resistance. There are people who immediately gravitate to it and say like, oh yeah, that makes a lot of sense. Let’s start with the base rate of, we were talking about autism, for example, autism’s base rate is about 1.7%. So how likely is it that the person has it, right?

There are some people who love to think like that. And most people who are in the position of making these kinds of decisions are not the type of people who gravitate towards the outside view. And they start making objections exactly like you’re talking about. They say, no, I need to think about this particular individual or I think the base rate that’s out there is wrong, or here’s why it’s not important in this case.

What we really want to do is find the right balance between the outside and the inside view, because we don't want to overweight either. But what we know from billions of studies is that if you start with the inside view, you're more likely to make a commission error, to diagnose something that actually isn't there, than if you start with the outside view first, anchor on that, and then allow the person's experience to shift your percentages up and down a little bit.

We know from medical research that the commission error is basically the most common error that we make. We make a diagnosis that is not there. There's a great example of it: when we see something happening that's wrong, we say, oh gosh, something needs to be done. Making a diagnosis is something. Therefore, this needs to be done, right? We want to help. We make that commission error, and the best way to not make it is to start first with that outside view.

Dr. Sharp: Yes. I’ll add a layer there that you touched on at the end which is, there is pressure. I think we feel pressure to do something. We feel pressure to be helpful. And the shortcut to being helpful is making a diagnosis because a lot of us use that as a proxy for helping or for solving a problem or whatever. And sometimes it is, but sometimes it’s not.

Dr. Stephanie: What you're touching on is why being right, or less wrong, I should say, why being less wrong is so important. We do feel this pressure to diagnose, but when we diagnose, we are centering the problem in the individual person who is coming into our office. We are saying, you have the problem. Your brain is the problem here. I've looked at all the other factors. I've considered the entire environment. I've considered alternatives and yup, your brain is the problem.

Sometimes that's the right answer, but that is an awesome, weighty responsibility. When we are saying that, we owe it to our clients to be right, because you can think of examples, like a child who is in a whole language learning environment. Whole language is a terrible way to teach people to read, right? It doesn't teach people anywhere close to as well as phonics does. And if we take that person and say, you have dyslexia, we're saying, you're the problem here.

When we do that, we take away the ability to look at, well, maybe the whole language approach is the problem. And we also are creating this two-tier system where we say, oh, we've helped that particular child. We've said, okay, you have dyslexia and you need to be taught systematic phonics, but we leave everyone else behind and we don't help them fix the environmental problems that might be happening there.

And we also create this system where the people who can come into our office who are mostly white, mostly higher SES, mostly more resourced, get the diagnoses that we personally like and that we think are milder and everyone else who isn’t like that gets diagnoses that we don’t like as much, or doesn’t get a diagnosis at all or doesn’t get accommodation at all.

And then over time, you have this system where the people who are more resourced get these nicer diagnoses, and then we'll all start arguing, oh, those aren't even really diagnoses at all, they're just differences and reasons why the person should get some accommodation. And everyone else either doesn't get the accommodation, doesn't get identified, doesn't get the support that they might need, or gets the bad diagnoses, like oppositional defiant disorder or a personality disorder, things that we have our own stigma around.

Dr. Sharp: Yeah. I know that we could continue down that path for a long time. That's a separate issue. I don't know the timing of the episodes, but I just spoke with Jordan Wright about context-driven conceptualization. So at some point, either before or soon after this episode, whenever you're listening, go look for that, because all we talk about there is the context and the environment and how that can influence conceptualization.

Dr. Stephanie: Wonderful. That sounds amazing.

Dr. Sharp: I want to circle back to the math part though. You talk a lot about base rates, which is awesome. This is a dumb question, but where do we find these base rates and how do you actually implement base rates in clinical decision-making? Because it sounds great. It’s like, oh yeah, base rate. That’s what I should start with. That sounds really informative. Meanwhile, I’m in the room doing an assessment with someone and I forgot that base rate that I looked up two months ago or six months ago, or five years ago. And what do I actually do with that?

Dr. Stephanie: I don’t think there’s a person on earth who has them all memorized. Sometimes people think that I wander around with all this information in my head and I’m like, no, I have Google. I just Google. I look for a meta-analysis, so I’ll just type in ADHD prevalence meta-analysis children, something like that, and get my base rate.

Eventually, I’ve looked it up enough that I remember a lot of them, but I don’t carry them around that much. Our intuitions about base rates tend to be strikingly wrong. For example, what do you think the base rate is of teenagers who give birth before they reach adulthood?

Dr. Sharp: I'm just thinking. I'm trying to give a real answer, an answer that will be right or wrong, because I want to be right. I don't know, 4%?

Dr. Stephanie: Yeah. So when you think about that number, it’s 2%, you were pretty close. The average American who doesn’t know guesses 20%, which is kind of crazy. The average psychologist guesses less than 1% because we don’t see a lot of teen mothers in our office, I think most of us. And so the answer is 2% and then you start thinking, okay, well, how does that compare to other base rates? That’s actually three times as high as the current base rate of girls with autism, for example, which is right now believed to be 1 in every 145 girls, which is 0.7%. So, three times as many teen mothers. I don’t think about that. That does not occur to me in everyday life.

So we're surprisingly bad at intuiting things about base rates. So I don't rely on my intuition at all because it's wrong. I look it up and I say, what's the likelihood that this is what's happening? And I try and break it down and say, okay, when I'm thinking about obsessive-compulsive symptoms, for example, I'm maybe looking very specifically at how common obsessive-compulsive symptoms are in someone I'm worried about psychosis in, right? So I'm looking for very specific base rates, and I just Google it and try and figure that out to anchor myself to what's most likely, and then move it up or down depending on the specifics of the case.

Dr. Sharp: That's good. We'll hold that. I know we're going to talk more about how to be less wrong, but I want to talk about that last bias that we tend to fall into sometimes: the overconfidence thing.

Dr. Stephanie: Overconfidence. Adam Grant, in that book Think Again, calls it standing on the summit of Mount Stupid, where we're more likely to want to talk about something or to think that we're good at something as our experience increases a little bit. But right there, that's the summit of Mount Stupid. When we think we know a lot about something is when we're probably the most wrong.

And experts, actually, the more they know about something, will go through this period where they feel less confident. They feel less certain that they have it right until they reach a point where they're so versed in it, they've been studying it for like 30 years, and then they'll be willing to talk about it again.

This overconfidence leads us to feel really right while we're really wrong. Kathryn Schulz compares it to that moment when Wile E. Coyote steps off the cliff. At that moment, while he's still running, he thinks he's going to get that Roadrunner, until he looks down. We're all Wile E. Coyote right off the edge of the cliff before we've looked down. We feel really confident when we're absolutely 100% wrong.

Dr. Sharp: Again, discouraging. It's so discouraging. So, that leads me to ask, how do we behave when we're wrong? There's a lot of this overconfidence. We think we're right. So then where does it go from there? Do we ever find out we're wrong? What do we do when we actually find out we're wrong, all these things?

Dr. Stephanie: We do eventually start learning that we are probably not as right as we think. We get over that summit of Mount Stupid. That's another way of phrasing the Dunning-Kruger effect, that when we're novices, we tend to be more confident, and as we become more expert, most of us, not all of us, most of us do start thinking more about how we can be more right and about when we are confident when we shouldn't be. Not everyone.

The more praise you get for being confident and the more you feel like it has worked for you as a social tool, you may just become a very confident person who is very confidently wrong a lot. But most of us do start figuring out that something is going on and start looking around for some strategies for how to get better at this.

Dr. Sharp: That’s a nice segue. I know we’ve touched on this a little bit, but what can we do to be less wrong?

Dr. Stephanie: One of the nice things about the Heath brothers' frame of those four horsemen is that even if we just address those, we're probably going to make a huge, dramatic improvement in being less wrong. And each of those problems suggests its opposite as the solution.

So instead of using a narrow frame, they talk about widening your options, widening the number of things that you’re considering. Instead of looking to confirm your preexisting hypothesis, they talk about reality testing your hypothesis and actually looking for disconfirming data. They call that reality testing your assumptions.

They say, instead of being emotional about things, attain some distance. Look at the math, look at the outside view, start with those more objective features and be suspicious of why your emotions or your gut might be leading you in a specific direction. They call that attaining distance. And then, the fourth step, instead of being overconfident, actually start preparing to be wrong or start preparing for how you can be right about some things and wrong about others.

In my consultation with clients, I’ll often talk about like, we might get the diagnosis wrong. The literature suggests we have a good possibility of being wrong about this specific diagnosis. So let’s try to be right about what this individual needs to be successful moving forward. So that’s an example of preparing to be wrong and looking to make something good out of it anyway.

Dr. Sharp: Yeah. I'm thinking back to the first thing that you said, the widening of the frame. I talk with our trainees a lot about this. We have interns and post-docs here in the practice. Our interview form or template is super comprehensive. I always say to them, we ask all these questions no matter what; even if somebody comes in and they're like, I just want to know if I have ADHD, we ask all these questions. I wonder, is that an example of widening the frame, like when you're ruling out other concerns?

Dr. Stephanie: Absolutely, if you honestly and rigorously consider that data. A lot of us have 20-page-long intake forms and then just zoom in on the stuff that we want. That's another example of that anchoring bias. But if you really do consider all those other options, then that is a great example of widening the frame, of considering more options.

The Heath brothers use this example of, instead of a checklist, what they call a playlist: a list of things to consider. When someone comes in with a question of ADHD, for example, you might have a playlist of other things that you would consider.

The way that I do it is I try and make myself think of 20 other things that it could be, and just jot them down. Because I’ve been doing this for a long time, they look kind of like a playlist, right? If the question is attention, I’m probably going to list the same 20 things as I did last time. Maybe slightly different for this individual person, but I’m just thinking what else could it be?

And what I'm trying to do is make myself generate the ideas, because we'll pay attention to those, whereas I'll discount anything that the patient told me or that's just written on my form. When I actually make myself deeply consider these ideas, we have a better chance of widening that frame.

Dr. Sharp: Yeah. As you talk about these strategies, they sound great. They sound wonderful. And I just think about real life and how do we actually put these into play? And maybe there’s not an answer. Maybe it’s just, you have some professional responsibility to train yourself to do this differently, but I wonder, are there ways to remain mindful and concretely put some of these strategies into play?

Dr. Stephanie: I think that we need to take a few moments to engage our System 2 brain. It's not just doing the System 1 thing of what our gut tells us, and also not just putting in place rote checks and balances. The last thing we would want would be an alphabetical list of DSM diagnoses that we could look at, because we know that the things at the top of the list are the ones that we look at.

There's actually a reason that ADHD, autism, and anxiety are three of the most commonly diagnosed things, and it's not just to do with the numbers. They all start with A, and we heavily weight things at the top of a list, and we tend to alphabetize. So we have to be a little bit mindful about these things. But if we get in the habit, Eric Johnson, in his book about making good decisions, talks about this as making the decision more fluent.

When decision processes feel hard, we don't do them. So we need to make choices more fluent, like you're talking about. We have to do these little things that become really automatic for us but still require us to think, these habits of thought, to help generate other ways of being less wrong.

Dr. Sharp: Right. What is the role of other people in this process? Do you know much about that? Does it help to consult or is that just activating another groupthink or something?

Dr. Stephanie: Yes. This is the best part of reality testing your assumptions. The antidote to confirmation bias is other people. Getting more perspectives, getting diverse perspectives. So, if you can get a consult group, that will help you be so much less wrong. It has to be the right kind of consult group. It can't just be other people who think the same way that you do or make decisions the same way that you do, because then it just becomes groupthink and you're just an echo chamber.

It has to be truly diverse people who not only have different levels of expertise but think differently than you do. There should be some members of your consult group that you kind of want to strangle. Then you’re going to be making more accurate decisions.

And the group has to function in a particular way. In particular, they have to reality test your assumptions. They have to challenge your thinking in some way. There are a lot of different ways to think about this group. You can think of Abraham Lincoln's team of rivals. Or, if you've seen the show The West Wing, they have a great couple of episodes about the red team that's trying to challenge a particular view. Or think of Her Majesty's loyal opposition in British politics. Adam Grant calls it your good fight club. Tasha Eurich calls it your loving critics, right? Your trusted naysayers.

Another way to think about it is the devil's advocate. I always thought devil's advocate was just a made-up phrase, but it actually used to be a formal position in the Catholic church. It was called the promoter of the faith, the promotor fidei. Their job was to argue against someone being made into a saint.

Like, if Mother Teresa is brought before the board, their job would be to argue why she shouldn't be made into a saint. And this was a deeply committed Catholic who wanted to protect the faith and maintain its integrity. Nobody really wanted that job, but a loyal person would try and poke holes in the argument and think, should we really make this person a saint?

The Catholic church eliminated that office in 1983, and since then, saints have been canonized at a rate 20 times faster than they were before. So when we don't have that devil's advocate, we make the commission error, and we make people into saints who we wouldn't have before.

Dr. Sharp: I love that story. Thanks for that. There are so many examples of what we're talking about here across the literature. It's coming up. I'm rereading Radical Candor right now, which is by Kim Scott, about management and leadership. And she talks about this a lot as well. In a functioning workplace, just from a business standpoint, you need folks who can safely challenge your ideas and that helps with…

Dr. Stephanie: Present ways of thinking about it that you wouldn’t have. You need people who you trust and people who love you and people who are different than you.

I actually went to college with a guy who is a social psychologist now, Sam Sommers. He has looked at juries. And he's found that when you have the choice between a jury that's entirely white, which is the case a lot of the time, and a jury that has any level of diversity in it, by every important metric, the diverse jury makes a better decision.

It’s not just because diversity is good. It’s because different perspectives tell you what you… you can’t know what you don’t know. So when you add in these different perspectives, you’re increasing the knowledge base that you’re using to collectively make this decision and your decision will always be better.

Dr. Sharp: I love this. I know, as we start to wrap up, there are two more things that we could touch on. One is that we've established that we're wrong a significant portion of the time. There are some things that we could do to try to lessen the frequency of being wrong, but we're going to be wrong. So what do we do when that happens? Or what should we do, maybe, is a better way to put that.

Dr. Stephanie: Because we know what we do, right? We cringe and fall, we retreat into shame, we say we didn’t do anything wrong. The DSM made me do it. Or we justify, or we blame the victim or we claim we couldn’t help it. Sometimes we’ll admit we were wrong, but we’ll say it like, oh, mistakes were made. That’s the title of a fantastic book about self-justification, Mistakes Were Made 

Dr. Sharp: Yes, but not by me.

Dr. Stephanie: Exactly. Or we'll say, I made a mistake, but it wasn't typical of me, or it didn't influence things, right? Obviously, the people who we make the decisions about disagree with us entirely. They think we could have helped it, that it wasn't mindless, that it was typical of us, and that it did have lasting consequences.

So what the authors of that book, Mistakes Were Made (But Not by Me), Carol Tavris and Elliot Aronson, say we should do is just admit when we're wrong without feeling the need to justify or defend it, without saying, oh, this is what led me to that.

We should instead just say, oh, here's what we did. I'm sorry that that happened. Here's what I'm going to do so that it doesn't happen again to someone else. Patients find that third part really important. And I think we find that third part really important too. That part where we say, here's how it won't happen again, because now we're learning from our mistakes.

There's a famous saying, I think it's by Richard Feynman but I can't remember off the top of my head, that an error isn't a mistake unless you stubbornly refuse to change your mind. So if we just admit the error and don't let it become a mistake that is perpetuated moving forward, we will be a lot happier.

I mean, the mistake is going to come out anyway. It's easier to fix when we catch it early on, and we get more credibility. Our patients actually feel like it humanizes us when we say, I was wrong. When we say those really difficult words, I don't know, they still see us as competent, but now they see us as competent and human.

Richard Friedman, who writes about medical stuff for the New York Times, he's a doctor, said, in the end, most people will forgive their doctor for an error of the head but rarely for one of the heart. And so, if we can just admit that we were wrong or that we don't know, and here's what we're going to do and how we're going to move forward, that's the way to get out of that shame trap. We are not our mistakes. We are not our errors. We are smart, competent people who, because we're smart and competent, make mistakes.

Dr. Sharp: Yes. That dovetails well with Brené Brown's work as well.

Dr. Stephanie: Yes, absolutely.

Dr. Sharp:  She’s everywhere. But to combat that shame, just put it out in the open.

Dr. Stephanie: To separate that out, right? When we think about being wrong, it can feel so horrible. Tavris and Aronson called America a mistake-phobic culture because we're so terrified of mistakes in this culture. We think that it means that we could have been better.

Dr. Sharp: Yeah. Well, I hope that even talking about it in this format normalizes mistakes a little bit. I mean, it certainly has for me just to hear some of the research around. I was tempted to say we’re just not good at this, but I don’t know that that’s the case. We work in a very ambiguous profession where there is not a lot of objective data that we can tie to the decisions we’re trying to make. And so, we’re doing our best.

Dr. Stephanie: And we are better than amateurs. When we look at expert decisions versus college students who are given a little bit of information, we do make better decisions than that. So we’re not making as bad of decisions as we could.

We also have to remember that our decisions, while we want them to be less wrong, they’re not the only thing that we do that’s helpful. There is a lot of magic to many of the things that we do. Listening to someone’s story with our full attention, helping them rewrite their narrative, connecting them with solutions or resources, giving them empathy, eliminating some things that are very wrong.

There are a lot of ways that we help that we don’t want to let this focus that we’re doing right now on the fact that we’re wrong about some of it negate how much we’re doing that is wonderful as part of an evaluation. This is just an area where we could improve and there are tools out there to help us with it.

Dr. Sharp: I like that. I feel like you always do a great job balancing these perspectives and ending on an encouraging note. So I appreciate that.

Dr. Stephanie: Thank you.

Dr. Sharp: So this is great as always, I know you’re talking about this topic again in a lengthier presentation at the ABPdN conference coming up at the end of April. So, if folks want to hear more about this, then definitely check that out. I’ll link to it in the show notes, as well as all the books we listed. This is the most book-heavy podcast I think I’ve ever done. Lots of books in the show notes.

Dr. Stephanie: Well, you have caught me in, as I was saying, my deep dive phase. So I’ve recently read about 30 books on this topic and read over 100 studies. So I have about 30 pages of notes about this that I’m going to try and condense into that presentation. So if anyone wants to dive deeper into this just cause they like to nerd out on it, I got 30 pages of notes and you can buy me a cup of coffee and we’ll chat.

Dr. Sharp: Nice. Well, I appreciate you sharing some of those pages of notes with us today. Always a pleasure to talk with you and look forward to the next time.

Dr. Stephanie: Great.

Dr. Sharp: All right, y'all. Thanks as always for listening. I hope you found this helpful. There are so many resources and books in the show notes. So definitely check those out if your curiosity was piqued by our discussion today and the mention of several books that Stephanie is getting her information from.

The Testing Psychologist mastermind groups are continually enrolling. We have just launched new cohorts of the beginner, intermediate, and advanced practice groups. So enrollment is starting over for groups to begin ASAP. If you are at any stage of group practice and you'd like some group coaching and accountability, this is the place for you. You can get more information at thetestingpsychologist.com/consulting, and you can schedule a pre-group call to figure out if it's a good fit.

All right. That is all for today folks. I will be with you next time. Take care.

The information contained in this podcast and on The Testing Psychologists website is intended for informational and educational purposes only. Nothing in this podcast or on the website is intended to be a substitute for professional, psychological, psychiatric, or medical advice, diagnosis, or treatment.

Please note that no doctor-patient relationship is formed here, and similarly, no supervisory or consultative relationship is formed between the host or guests of this podcast and listeners of this podcast. If you need the qualified advice of any mental health practitioner or medical provider, please seek one in your area. Similarly, if you need supervision on clinical matters, please find a supervisor with expertise that fits your needs.

