278 Transcript


[00:00:00] Dr. Sharp: Hello, everyone. Welcome to The Testing Psychologist podcast, the podcast where we talk all about the business and practice of psychological and neuropsychological assessment. I’m your host, Dr. Jeremy Sharp, licensed psychologist, group practice owner, and private practice coach.

This podcast is brought to you by Blueprint.

Measurement-based care is proven to improve patient outcomes, but it’s historically been time-consuming and costly to implement in private practices like ours. That’s why Blueprint has developed an end-to-end solution that administers, scores, and charts hundreds of symptom rating scales to give you deeper insights into how your clients are progressing. Learn more and request your demo at bph.link/testingpsychologist.

Hey, everyone. Welcome back. Glad to have you. Today, I have Dr. Cecil Reynolds, whom you have likely heard of, as the guest on the podcast.

Dr. Reynolds is an Emeritus Professor of Educational Psychology, Professor of Neuroscience, and a Distinguished Research Scholar at Texas A&M University. He is the author of more than 45 commercially published psychological tests, including the BASC-3, RIAS-2, TOMAL-2, and the PdPVTS. Last fall, he was named in the top half of Stanford’s list of the top 2% of scientists in all fields worldwide. He currently writes and practices forensic neuroscience in Austin, Texas.

Like I said, you have likely heard of Cecil. He has his name on many measures that we have used and continue to use. He is here to talk with me all about performance validity testing in kids, which is a topic that we have covered before on the podcast with Dr. David Baker, [00:02:00] but this time we take a little bit of a different angle and really dig into the value of PVTs: why using PVTs makes us better clinicians and how it informs our clinical decision-making. We talk about the interpretation and use of PVTs in situations where there is suboptimal effort that is not necessarily deliberate or conscious. We talk about many other topics, but these are some of the big ones.

As always, I think there is a lot to take away from this episode. Cecil has been in the field for a long time and has done so much as far as test development, research, and practice. I’m very grateful to have him here to share his expertise. So let’s get to my conversation with Dr. Cecil Reynolds.

Hey Cecil, welcome to the podcast.

Dr. Reynolds: Well, thank you. I appreciate the invitation and the opportunity to speak. I just want to say thank you to the folks that are listening. I’m always flattered when people show up to listen to me talk about things that are of interest to me because, as my wife likes to remind me periodically, it’s mostly just me who finds them interesting. So I’m really flattered when people show up.

Dr. Sharp: Absolutely. I think, in this case, the material we’re going to be talking about will be interesting to many more people than just you. So at least, in this case, you can tell your wife, hey, this is interesting to more people. I’m honored to have you here. I feel like you are just a [00:04:00] giant in the field. You’ve done so much work. We see your name on so many measures and papers and materials that we look at. So I’m just glad to have you and excited about our conversation.

Dr. Reynolds: Well, thanks. I really do appreciate you having me here.

Dr. Sharp: Of course. Well, there are many things we could talk about, but as people heard in the introduction, we’re talking about performance validity testing for kids. We’re going to dive a little deeper into this topic. I always like to start with this question of why this, why now, especially for you where you’ve done so many things, why focus your attention on this?

Dr. Reynolds: Well, let me give you two broad reasons, and then if you want to dig deeper into that, we can. The first is that it’s very interesting to me because it’s the next thing. It’s not just the next thing that I’m involved in because being ADHD, I’m involved in a lot of things all at the same time. I have to multitask. I can’t work on the same thing all the time.

It is one of the next things that I’m doing, but it’s also something I think the field ought to be doing, taking more seriously, and doing a lot more of. And that’s consistent with some of the recent professional white papers and other guidance we’re getting from our professional organizations. The American Academy of Clinical Neuropsychology consensus paper on performance validity testing, which also deals to some extent with children, greatly expanded its section on performance validity testing with children last year, [00:06:00] created a much greater emphasis on it, and pointed out something we pointed out about seven years ago when we started a major project in that area: that this isn’t just an issue in forensic or forensically related exams.

It’s an issue in everyday practice. To the extent that it can improve practice and make you a better examiner, which I clearly think it can (it certainly made me a better examiner to include these measures with children), then we all ought to be pursuing it and including it as a regular part of our exams. And that’s why it’s so interesting to me. That’s true for most of my tests, I’d say almost all of them, and I am on my 50th commercially published test, which came out last year.

Dr. Sharp: Congratulations.

Dr. Reynolds: Thanks. They have all been inspired, more so than in any other way, by my own contact with patients: finding a need to be able to assess something that I couldn’t measure and feeling like, I want some numbers for this, I don’t want to just fly by the seat of my pants; or being dissatisfied with the measures I was using and feeling like we can do this better.

And so, performance validity testing in children for me came about in large part the same way. It kept gnawing at me for years that we’ve got to be able to do this and do it better. We’ve been doing it with adults for a long time and [00:08:00] it’s going to be valuable with kids. So I got together with colleagues and started really digging into that literature. And it seemed clear that not only was there a need, but we had sufficient literature to begin to fill that need.

Dr. Sharp: Yeah. It’s a nice intersection of circumstances. I’m going to start with a dumb question. And that question is, are there any PVTs specifically designed for kids aside from the one that you developed?

Dr. Reynolds: There is one published by PAR. We looked at that one and our impression of it was that it was a little too easy and too recognizable. It also was a single-test format. And so we felt like it didn’t meet the needs of the exam process for kids. In fact, we don’t think any single performance validity test can meet the needs of the exam process with a child. But that was the first one that we’ve seen that was specifically designed for children.

What people had been doing when they did performance validity testing with children, which is still not something that’s done very often, was simply taking an adult performance validity test, giving it to children, and assuming that the stimuli and the materials were equally appropriate and that the adult [00:10:00] cut scores would apply. And it turns out, my reading of the research is that that’s controversial and that age-related cut scores improve our sensitivity and specificity.

There are people who disagree with me on that and think that the literature is sufficient to say that the adult cut scores do apply, but there are other peer-reviewed published papers that say, well, no, those cut scores on the adult measures are not reliable for children, and they do need to be age-adjusted.

So as I looked at that, once we realized that these were adult tests where the stimuli were not designed specifically to be attractive to children, to be within the realm of children’s interests, and we looked at the cut score issue, even if it’s just controversial (I think the evidence sides with age-adjusted cut scores, but let’s just say, okay, the evidence is equally divided), why would you do that?

If there is something available that was specifically designed for children, both in content and format, with validation across multiple clinical groups, that has information on base rates across multiple PVTs, all of those things with age-adjusted cut scores that have been validated and cross-validated, why take something and try to adapt it from an adult environment?

Dr. Sharp: Right. Let me ask a naive question. [00:12:00] It seems relatively intuitive to say, yeah, we should have a test that’s truly developed for kids. What is the argument on the other side, that it would be okay to use these adult measures that have just been handed down?

Dr. Reynolds: The key argument is, and the fundamental basis of the argument is correct, that the adult measures have been around many more years, decades in some cases. And we have a cornucopia of research on them. The instruments designed and developed specifically for use with children are only a few years old. And the bulk of the research on those is in the test manual, and we don’t have a lot of independent peer-reviewed research on those.

So the argument has been, well, if you can’t trust the data in test manuals, then you can’t trust the publishers, you can’t trust the authors, you can’t trust the test. So I don’t find those arguments convincing, but the statement that there is a lot more research on the adult measures is absolutely true. There is. And that’s basically the argument that I get when I talk about the need to use the ones developed specifically for children: well, they’re too new. I’ll switch to those in ten years.

Dr. Sharp: That’s a long time.

Dr. Reynolds: It is a long time.

Dr. Sharp: It’s a long time to wait.

Dr. Reynolds: And I think [00:14:00] it’s unnecessary, particularly when there is a controversy over the adaptation. And actually, they’re not even adapted adult measures. They are just taken as they are. That’s how they’re typically used.

Dr. Sharp: Yes. I want to ask you a question. Some people may be familiar with my interview with David Baker from a few months ago. We talked about PVTs in a pediatric population more from a statistical standpoint; we got into practice a little bit. But I’ll ask you a question that I asked him, and then I think we can dive a little bit deeper into some of these things.

You mentioned a bit ago that using PVTs with kids is coming out of the forensic realm; I mean, it should be almost a matter of course in our evaluations. Is it to the point where you would say we should be giving these to every kid that we’re assessing?

Dr. Reynolds: Absolutely. I can’t think of any reason not to, and I can think of multiple reasons to do it. So why wouldn’t you? If it’s a matter of time, the child PVTs are very quick. PVTs don’t have to take 30 minutes. The one that I developed with my colleagues, and let me acknowledge them here, because if we get into that and get into the research on it, I want to be sure folks know this is not just my work. We had a team of folks, actually three other neuropsychologists that I worked with on this for about five years: Dr. Robert McCaffrey, Dr. Robert Leark, and Dr. Julie Lynch, and they’re wonderful folks to work with. [00:16:00] All are practitioners and academics, well published, and they also have clinical practices and see kids. So it was a great team to work with. As we talk about a lot of things, when I say we, that’s who I’m talking about, so we’ll be clear to acknowledge them and their efforts.

Dr. Sharp: Thanks for that. As you said, yes, we should be using these with all of our kids. And just for discussion’s sake or validation’s sake, statistically speaking, I’ll see if I can remember this: I know that the rate of non-credible effort in a clinical population, especially TBI and other specialized populations, is relatively high, right? It was 20%, maybe 25%. In a non-TBI population, is it maybe down around 5%, something like that?

Dr. Reynolds: Well, when you talk about non-credible performance, I’m going to force you to be more specific. I’ll even do that for you if you want.

Dr. Sharp: Thanks. If you want to take that one, you’re welcome to do so.

Dr. Reynolds: Let me talk about that a little bit, because the typical comprehensive exam with a child, particularly if it’s a neuropsychological exam (those tend to be longer), but even school exams, takes place over the course of multiple days and can involve 6, 9, 12 hours of face-to-face testing. So I would say that in the [00:18:00] majority of cases in which you do that, you are getting maximum effort from children on some parts of your exam and not on others. And maximum effort, best effort, is required if you’re going to interpret tests the way we’re told to interpret them; you have to get the best effort. Can I take a little tangent on that for just a moment?

Dr. Sharp: Yeah, I’d love to highlight that. I think that’s overlooked a lot of the time. We assume we’re getting maximum effort, or we forget that we need maximum effort to interpret the test the way it was meant to be interpreted.

Dr. Reynolds: Exactly. Almost every test manual out there talks about the need to develop rapport with the examinee, to encourage them, and to reinforce them for effort, not for correctness, but for effort. If you selectively reinforce them for correctness, it changes the scores in ways that deviate from the standardization, but even in standardization, examiners are told to reinforce the examinee’s effort.

So there are generally two big classifications of tests. We talk about maximum performance tests and typical performance tests. A typical performance test is something like a personality instrument, behavior rating scales, things like that, where the question we’re trying to answer is how do you usually or typically behave or think about this? Maximum performance tests are things like intelligence tests, virtually all neuropsychological [00:20:00] tests, achievement tests, tests of attention, CPTs, all of that, where the question is not how do you usually think and solve problems?

The question is how well can you do this?

If I want to know what your IQ is, I want to use that IQ to assess your ability to think and solve problems. And if I’m looking at your ability to do that, I need you to do the best you can do. I need your best effort. And that’s why these tests are designated by psychometricians and tests-and-measurement textbooks as maximum performance measures. How well can you do this?

Ralph Reitan, in particular, was on an absolute pulpit about that. He had a soapbox about that that was relentless. For 40 years, Ralph preached: if you do not get maximum effort from the examinee, you cannot draw inferences about brain-behavior relationships from these scores. So if I’m going to talk about your intelligence, I need to measure it under circumstances where you gave me your best effort.

To take that back to the 6, 9, 12-hour thing, what happens with children? They get tired, they get bored, they get resistant. They get to be a lot of things other than trying to deceive, and that’s what’s going on with most of them when you get invalid results.

So one of the [00:22:00] things I mentioned back just a few minutes ago when we started this, was that it improves our exams. I want to elaborate on that if I can.

Dr. Sharp: Yeah, please.

Dr. Reynolds: Using PVTs, and you need to use multiple PVTs in the course of an exam, by the way; I would bet that you’ve probably covered that in other podcasts. These exams are so long that if you just throw in a PVT somewhere, it hasn’t helped you very much, because that assumes the effort was the same across all that time. And particularly with kids, we may even break the exam up across different days. You can’t rely on that.

So the best practice recommendations and the white papers from our organizations tell us you literally need to sprinkle these in over the course of your exam, because children’s level of effort in particular, even more so than adults’, will change during the course of that exam. And we like to believe that we are so good at this that we know when effort wanes, or we know if a child is engaging in dissimulation, that kids can’t be that good at it, right?

Well, the research literature says we’re wrong. And I like to think I’m that good at it too. I don’t even want to try to tally up how many thousands of kids I’ve seen over my career since I did my first psych exam in 1975. And for decades, I thought what most people want to think: I’m really good at this. I’ll recognize when the child’s not giving me their best effort and [00:24:00] I’ll bring them back to best effort.

The research literature says I’m wrong. I don’t like that. I don’t want to believe that. But at some point, if you’re a strong clinician, if you’re really good at what you do, you have to follow the research and follow the evidence. If you’re going to engage in evidence-based practice, then you have to believe that you’re not as good at this as you think.

Dr. Sharp: That’s hard for a lot of us to swallow. It runs so counter to the way a lot of us came up, where we are trusting our intuition and trusting our clinical judgment. And there’s some identity wrapped up in that; it’s hard to let go of.

Dr. Reynolds: Absolutely. It’s hard to take that gut-punch, but once you do and you accept it, you realize that using performance validity tests, particularly with children, will make you a better examiner, and it will make your exam results better, more interpretable, and more relatable to whatever diagnosis and intervention strategies you’re aiming towards. If I can, let me talk a little bit about why I think that’s so and how that works.

Dr. Sharp: Yeah.

Dr. Reynolds: We have all done a lot of exams. And that’s one of the reasons we think we’re so good at this: from a clinical standpoint, we can trust our gut. But part of what happens is that as we move through the exam, kids do get sleepy. They get bored. They start just engaging in routinized answering, and we’re in our routine of giving the test [00:26:00] and everything seems to be cruising along fine. We’re not making any major administration mistakes or anything like that. And that’s very comfortable. We might not recognize that the kid is just going through the motions as long as they appear to be cooperative. And they don’t even know that they’re not really working hard on these problems.

Dr. Sharp: Can I ask you a little detour of a question there? Do we have good research on that in terms of kids’ self-assessment of their own effort or is that getting too granular?

Dr. Reynolds: We don’t have that. We do have some research on kids who have failed performance validity tests in forensic environments, where, on a follow-up interview, they admitted that they were not giving their best effort and weren’t really trying to do well. But the only place we have that level of granularity is in forensic settings, and that’s what kids will admit to there.

What I see and what I’ve experienced clinically, and one of the things that led me to build the PVTs that we did, is that as I sprinkle those into exams, I find kids often start out doing great, and then their performance goes up and down. Well, if I use PVTs at different points in the exam, when a kid fails one, it’s typically not because they’re malingering or engaged in dissimulation, which is a purposeful act. It’s that they’re bored, or they’ve just started giving routinized attention to it, just enough to make me think they’re still [00:28:00] engaged, or whatever.

So when that happens, I have to step back from the exam, figure it out, and say, okay, why did that just happen? And the first thing I look at is me. What have I been doing that caused me to lose engagement with this child? It’s easy to lose engagement with them when we’re in the routine of giving a test or asking questions and writing down the answers. There’s an emotional engagement: being animated with kids, being really on top of it, and showing them the energy that you want them to give in the exam. And we can wane when we’re doing that too.

There are two kinds of psychologists when you talk to them about that: ones who admit that they’ve had that experience, and ones who aren’t observant enough of themselves in an exam to know that it happens to them. Because I promise you, it happens to all of us.

Dr. Sharp: Oh, of course.

Dr. Reynolds: And I would bet that the majority of the people listening to this have also had the experience, when they give their 1,894th Wechsler, of asking the kid a question on Information and all of a sudden hearing this voice in their head that says, didn’t you already ask that?

So again, when a kid fails an effort test, the first person I look at is me. And I want to know, am I still engaged? Am I showing this child the energy in this exam [00:30:00] that I want back from them? Am I still supporting their best effort? Have I been reinforcing them for trying their best? Have I been doing those things to keep this kid at the point of giving me maximum effort, so that I can interpret these tests according to the constructs they’re intended to measure, knowing that if I don’t get maximum effort, I can’t make that interpretation?

So that’s the first place I go. And I would say more than half the time, what I realize is, I’ve just been sitting back going through the motions and being comfortable, because nothing was happening to disrupt anything and I didn’t make any mistakes, but the energy level of the exam has come down. And at that point, I know I have to re-engage with this child in a different way, whether that means taking a short break or, just before I go on to the next task, bringing my energy back to it.

I know that that’s vague, but anybody who examines children, or has some experience with that, understands exactly what I’m talking about as a clinician: bring your energy back to the exam and back to the engagement with the child, and also reinforce that with them. Tell them, hey, man, this really got boring, didn’t it? And you can explain to them what happened. Say, hey, I know that on that last test we did, you got pretty good scores, but [00:32:00] I really know you can do better on some of these. I saw you on some other tests really working harder, and you did better.

So now, when we do this next one, here’s what we’re going to do: we’re going to really go after doing better. Don’t copy those words exactly, because how you approach that depends on the child. But when it’s me, I have to re-engage. If I suspect that the child has just become resistant, then I probably do have to take a break, reorient the child to the importance of what we’re doing, link it to some outcome that’s important to them, and get them to re-engage, come back, and buy into the exam.

So you just can’t rely upon your gut to know when that’s happening, because we get too comfortable. One of the things I’ve related it to is why I keep the radar detector in my car, even though I don’t drive as fast as I used to. I’ve gotten over it. I’ve gotten a little more sedate behind the wheel, although my wife doesn’t necessarily think so. I used to like to drive; I liked to push the limits. But even though I don’t do so much of that anymore, I love my radar detector, because it makes me a better driver. Even if I’m not anywhere close to exceeding the speed limit, when that radar detector goes off, it calls my attention to the entire context of my driving. I immediately check all my mirrors. [00:34:00] I look ahead. I become more attentive, because we also daydream when we drive. We do a lot of things. We’re singing along with the radio. We’re doing whatever. Hopefully, we’re not texting.

We get distracted when we’re driving. We get distracted when we’re examining. And so when that buzzer goes off, it reorients me, makes me more attentive, and causes me to check the context of what I’m doing. When a kid fails an effort test, it’s like that buzzer going off: I reorient myself to the exam and then I start looking at, okay, why did that buzzer go off? And what do I need to do so it doesn’t go off again, in the sense of an effort test?

So it makes me a better examiner, it gets me better results, and it gives me greater confidence that I’m interpreting these tests appropriately. So that’s another key reason that we have to use multiple effort tests and sprinkle them in over the course of an exam because you don’t really know when that is going to happen.

Dr. Sharp: Right. I want to get into some details about the frequency of the tests, when to give them, and what to do with them. Before we do that, I just want to make something explicit that’s maybe thus far been implicit: there [00:36:00] are situations where kids can give suboptimal effort that isn’t necessarily deliberately misleading, if that makes sense. Am I understanding you right?

Dr. Reynolds: But yeah, my experience is that’s the majority of the time. When they give suboptimal effort, it’s for reasons other than purposeful distortion. It’s for those reasons I mentioned: boredom, fatigue. They’ve simply disengaged from your rapport, and now they’re almost robotically responding just to get through it. And maybe it is purposeful, if their goal is just to get this done as quickly as they can and get back to the playground.

Dr. Sharp: Right.

Dr. Reynolds: It’s not because they’re malingering.

Dr. Sharp: Right. I think that’s an important distinction.

Dr. Reynolds: There are lots of reasons why kids fail effort tests and aren’t giving the effort that’s necessary. Only one of those reasons is malingering. And outside of forensic exams, it’s one of the least likely explanations when kids fail effort tests.

Dr. Sharp: Yes. Do you have research on the more likely explanations for why kids fail effort tests outside forensic exams?

Dr. Reynolds: I really don’t. At this point, it’s very difficult to gather that information because, one, you don’t have access to what are basically, and I use the words routine clinical exams reluctantly because there should never be any such thing, [00:38:00] but that’s how we think about them, commonplace, typical clinical exams. We just don’t have that access.

What I have for that, and I’ll readily admit it, is my clinical experience with kids in the context of my […] exams and my discussions with colleagues. When I talk to people about this who see a lot of kids, and when I talk to them one on one and have conversations about it, you can almost see the little light bulb go off. And they’re like, yeah, I do that. Yeah, I’ve seen that happen. And I hate that I don’t have actual hard data to point to on that, but I don’t. But it is one of those things that I have clearly experienced, and I know other people have too. And it’s a real phenomenon if you work with kids.

Dr. Sharp: Oh, sure. When I do interviews with kids, I’ll always ask them in some way what their effort was like, just a subjective, descriptive question. We do get PVTs as well, but I’m always curious how they would describe it. And it is very rare that kids say 10/10. To their credit, most of them will say, eh, I was like a 7/10, and then I’m like, well, what was that like? Was it 10/10 for a while, then it dipped for a while, then it came back up, or was it just 7/10 the whole time? So I think kids know when they’re not giving it their all.

Dr. Reynolds: And they know if you call their attention to it. I don’t think they always know.

[00:40:00] Dr. Sharp: So maybe we could get into some details, because this is where I think people, at least in my experience, get stuck. I think the research says the majority of neuropsychologists are doing PVTs in their evaluations, but then there are levels to it, right? Like, are we doing one at the beginning? Are we doing the same one later in the day? Are we doing different ones later in the day? I’d love to talk about the frequency and the type and how we actually put this into practice. If we need to be sprinkling them in, what does that mean?

Let’s take a break to hear from our featured partner.

Introducing Blueprint, the all-in-one measurement-based care platform that helps behavioral health providers grow top-line revenue, increase clinician satisfaction, and deliver more effective care.

At Blueprint, they believe that nothing should get in the way of delivering the highest quality mental health care to your clients; not a lack of finances, clinicians, time, or technology. That’s why they’ve developed an end-to-end solution that works with your clinical intuition to engage clients, reduce unnecessary burden, and optimize outcomes. The Blueprint platform administers, scores, and charts hundreds of symptom rating scales to give you deeper insights into how clients are progressing. And it provides objective data that enables more productive and effective payer negotiations. That means getting paid by insurance. Learn more or request your demo at bph.link/testingpsychologist.

All right, let’s get back to the podcast.

Dr. Reynolds: Well, we certainly do need to be responsive. The guidance that I take from having read the literature on this with kids in particular, and there’s [00:42:00] less literature with kids than there is with adults, is that if you’re using tests with embedded effort measures, you should use at least two freestanding effort tests in the context of an exam with a child. If you don’t have embedded measures, then you need to use a minimum of three.

The reason for that, and the reason why you need freestanding effort tests, is that the literature is very clear that freestanding effort tests, for whatever reason (and what I’m about to say about the reason why is purely speculative), perform better. The freestanding effort tests are better at detecting suboptimal effort than embedded measures are. And we’re not entirely sure why, except that, in part, most embedded measures were not designed just for the purpose of detecting suboptimal effort. They do double duty and typically will contribute to the index scores or whatever composite we’re looking at.

And there is a rule in psychometrics that says a test designed specifically to measure something typically measures it better than a test that was designed to measure something else. Freestanding effort measures follow that rule. They were specifically designed for this purpose, and I think that’s why they do perform better. They have better sensitivity and specificity, pretty much across the board, than do embedded measures.

Embedded measures are better than nothing, [00:44:00] but we should give effort tests periodically. There is no real guidance on whether it should be an hour into the exam or two hours into it. If I notice anything that bothers me, I’ll just do one, because I can flow right into it. If I’m not noticing anything, I tend to use them as the next task after a particularly long test. Clinically, that’s been my preference. That, I think, gives me a good opportunity to see where we are with the child’s effort at that particular point in time.

And we just don’t have empirical data on the exact timing of insertion, and I don’t know that we could ever have that, because we don’t have standard exams. I’m sure you have a lot of experience in examining children referred for a specific problem. So do I. I would bet a lot that you and I won’t do the same exam. So how would we standardize and develop empirical guidance on what is the precise point of insertion for the effort tests?

We don’t have that with adults either, for that same reason. We’re not all doing the same exam, even with highly similar individuals with the same referral questions. That’s actually one of the issues of our profession. Most medical doctors will come darn close to doing exactly the same exam when you come in with a particular set of symptoms. [00:46:00] Psychologists, no. And in part, that goes to the great diversity in training programs, which doesn’t occur in medicine. Doctors know what other doctors know. You can’t say that about psychologists.

Dr. Sharp: Such a good point.

Dr. Reynolds: We have much wider diversity in what we do. But because we have a need, and need is not a word I use a lot, but I do see this as a need, to do performance validity and effort testing at multiple points in the exam, it brings us around to the issue of base rates. And one thing before base rates: I would just say, don’t do the same effort test twice during the same exam.

That said, we don’t have good data on that, but use a different one. When you do more than one, you need base rates for the combination of effort tests that you gave. That’s another serious problem with taking adult measures, which are all single-task freestanding measures: we don’t know the relationships among them in normal groups, much less clinical groups. So if you pass one and fail another one, what does that tell us?

Well, we’re not sure what it tells us in the overall context of the exam, because we don’t know the base rates there. But with children, we do have the Pediatric Performance Validity Test Suite, which is mine; as everybody knows, I have an [00:48:00] interest in that. And that’s the one my team and I developed. It has five independently developed performance validity measures that are co-normed and on which we have base rate data that we provide, as part of the computerized scoring report, for every possible combination of 1 to 5 of those.

So if you only give one, obviously the base rates are easy, but you can choose any two of those to give. And if the kid passes one and fails the other, we will give you the base rates for that, both in a non-referred sample and in separate clinical samples. We give you the base rates depending on which one they failed and which one they passed, or if they failed 2 of 2, we give you the base rates for all of that. Or suppose it’s a situation where you gave 3 or 4, or where you gave all 5 during the course of a two-day exam. We give you the base rates of that child’s specific pass-fail pattern for those specific measures, for a non-referred sample and for the separate clinical samples, and having those base rates is very informative about the totality of your exam.
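To make the bookkeeping concrete: with five co-normed measures, an examiner can give any subset of 1 to 5, and each subset has its own set of pass-fail patterns to norm. Here is a minimal sketch of that enumeration, assuming nothing about the actual product; the measure names and base rate entries are hypothetical placeholders, not actual PdPVTS measures or norms.

```python
from itertools import combinations, product

MEASURES = ["PVT_A", "PVT_B", "PVT_C", "PVT_D", "PVT_E"]  # hypothetical names

def all_patterns(measures):
    """Yield (subset_given, pass_fail_pattern) for every way an examiner
    could administer 1 to 5 of the co-normed measures."""
    for k in range(1, len(measures) + 1):
        for subset in combinations(measures, k):
            for pattern in product(("pass", "fail"), repeat=k):
                yield subset, pattern

# Every distinct subset/pattern combination a report would need to norm:
print(sum(1 for _ in all_patterns(MEASURES)))  # 242

# Scoring then reduces to lookups in tables built from the normative
# samples; the entries below are made-up placeholders for illustration.
base_rates = {
    (("PVT_A", "PVT_B"), ("pass", "fail")): {"non_referred": 0.08,
                                             "clinical_TBI": 0.19},
}
print(base_rates[(("PVT_A", "PVT_B"), ("pass", "fail"))])
```

The count of 242 subset/pattern combinations is just the arithmetic of the design; the point is that no clinician could hold those frequencies in their head, which is why they belong in a scoring report.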

Dr. Sharp: I think you’re going where my question is going to go. So we’ll see here. I would love for you to just articulate the importance of base rates. Why is it important for us to know that versus just the semi-objective data of, okay, they passed this one in the morning, [00:50:00] they just failed this other one. Well, that probably means their effort is waning. Why do we need base rates beyond that to interpret the data?

Dr. Reynolds: Well, one of the reasons is that you need to know whether or not it’s common. If you give five effort tests, what percentage of kids, even though they were doing their best, would still fail at least one of them?

Dr. Sharp: Great question.

Dr. Reynolds: If you don’t know the answer to that, how do you interpret data? It’s the same in some ways as looking at base rate data for interpreting, for example, verbal and nonverbal IQ discrepancies. There was a time when there was very little data on that in the literature.

Alan Kaufman did a survey of clinicians and asked them what they thought the base rates were of differences between verbal and nonverbal IQ scores. And it was very interesting that among the survey population of clinical and school psychologists, all of whom gave these tests routinely, the answer was that the average difference was about 4 or 5 points. It turns out the actual data in the standardization samples say the average is almost 10, and that about 1 in 4 people have a 15-point difference. That’s the base rate. So people were greatly [00:52:00] over-interpreting what are routine findings in normal populations as having great significance in clinical populations.

We want to avoid that over-interpretation just like we want to avoid under-interpretation. So if I have a child who’s referred, for example, for intellectual disability, and over the course of my exam, let’s say I administer five effort tests and they fail two, what does that mean? Well, one of the first things I have to ask myself is: if you give these five effort tests to a person who has a mild intellectual disability, how likely is it that, giving their best effort, they’ll still fail two? Without base rate data, you cannot answer that question. Same thing with a TBI referral. How often does this happen?
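As a back-of-the-envelope illustration of why that question requires empirical base rates, consider a toy model. The 90% specificity figure and the independence assumption below are hypothetical simplifications; real co-normed measures are correlated, which is exactly what published base rate tables capture.

```python
from math import comb

# Hypothetical: 90% of best-effort examinees pass each PVT, and the
# five tests are treated as independent (a simplification).
spec, k = 0.90, 5

p_fail_at_least_one = 1 - spec ** k
print(f"P(fail >= 1 of {k} despite best effort): {p_fail_at_least_one:.0%}")  # ~41%

# The intellectual disability example above: failing exactly 2 of 5.
p_fail_exactly_two = comb(k, 2) * (1 - spec) ** 2 * spec ** (k - 2)
print(f"P(fail exactly 2 of {k}): {p_fail_exactly_two:.1%}")  # ~7.3%
```

Even under these toy assumptions, roughly 4 in 10 best-effort children would fail at least one of five tests, which is why a pass-fail pattern means little without normative tables for the specific measures and populations involved.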

And by the way, there’s an interesting finding that we see in the child literature that’s consistent with findings among adults: on the effort tests, persons referred for evaluation of mild TBI have a higher base rate of failure than people with moderate and severe TBI, which is a fascinating finding. And of course, we don’t truly know why, but I think we all suspect the common reasons.

Dr. Sharp: Yes. I was going to say, that’s actually not surprising to me, given what I know about the subject. I’m not an [00:54:00] expert, but…

Dr. Reynolds: And it seems to be primarily related to secondary gain, but it’s interesting that it’s certainly counterintuitive to non-clinicians.

Dr. Sharp: Right. It doesn’t seem to make sense on the surface.

Dr. Reynolds: But it’s another one of those areas where if you don’t actually have base rate data for pass-fail rates on specific multiple effort tests, how can you hope to make sense out of the combination of those?

Dr. Sharp: That’s a great question.

Dr. Reynolds: Again, it’s another reason for not just taking adult measures and giving them to the kids where you don’t know the base rates of pass-fail across the collection of the adult measures that you’ve decided to use. You don’t know the base rates. And I just think it’s absolutely critical to have access to those base rates and not just in the general population, but the base rates for those combinations of tests and that combination of pass-fail rates among clinical populations because that is what we really do.

Dr. Sharp: Right. It just gets back to that point that we brought up a little bit ago about clinicians trusting their gut. There are levels to how much we’re trusting our gut, and it gets more and more complex as we administer more tests.

Dr. Reynolds: It does. And some of the things I’ve talked about so far are things where I do have to trust my gut, for example, when to give one during the exam. I know how to use them, but on when to use them, I have to trust my gut, because I don’t have empirical guidance. [00:56:00] But the hallmark of evidence-based practice is using the evidence where it exists, not ignoring it and using our gut in the face of evidence that says, no, do it this way.

Where we don’t have guidance, our gut is all we have. In the circumstance now of effort testing with children, we have a lot of evidence; well, not as much as we have with adults, but we have a lot of evidence. That evidence has been reviewed, and the conclusion is: you know, guys, if you’re not doing this, you’re not up to the recommended standards of best practice, because the evidence says it is useful and your guts are not as good as you think.

Dr. Sharp: Yeah, I’m right with you. This is interesting. I don’t know when the timing of episodes is going to be, but there is another episode coming out soon with Stephanie Nelson around cognitive bias in decision-making. And I think it will dovetail well with some of these things we’re talking about.

Dr. Reynolds: Well, and just as a recommendation for future podcasts, and also for folks who listen to this and want something to go read, take a look at the papers that David Faust has authored or coauthored on clinical decision-making and all of the logical fallacies and cognitive biases that we engage in. Faust is probably the best thinker on that issue and has coalesced more [00:58:00] of that research than anybody I know.

He might also be a great follow-up for the podcast, to Stephanie’s episode and some of the things that I’ve said. I know she is really good with that too. So I would encourage your folks, when they listen to this, to listen to Stephanie too. Faust has really devoted the bulk of his career to that issue. And he is one of the best thinkers in the field of psychology.

Dr. Sharp: Nice. I’ll put those resources in the show notes so people can check them out. As we start to wrap up here, I know there’s so much more that we could say about this topic, but I’m curious, since we’re ending on this note of following the research and knowing what we know: are there other aspects of this research that you would like to make sure to highlight, that we have not touched on yet as far as PVTs with kids?

Dr. Reynolds: Well, just that kids also can malinger. We didn’t really talk about kids who are malingering. It is a purposeful act for secondary gain. And that’s really where this came from; that’s how it got started, detecting people who were purposely distorting their responses for secondary gain. A lot of clinicians felt like children really couldn’t do that, or if they did try to do it, they were terrible at it.

I actually have some lengthier training on this in our materials, and one of the things we review there is all the [01:00:00] research showing that not only can children engage in deception, but many of them are really good at it and go undetected. Our ability to detect them is only slightly better than flipping a coin. If we don’t use objective methods, the average detection rate in the research is about 55%, which is statistically significantly better than chance because of the power of the studies. But in terms of clinical significance, it sucks.

You’re only right about 55% of the time. So we need to use objective measures. And that’s been recognized now by our professional associations, who say that quantitative measures of effort are clearly the best way to do this. So follow the research. And again, follow the research where there is research; that’s what makes you evidence-based. If you don’t have an evidence base for something, that doesn’t mean you can’t do it or shouldn’t do it. There are lots of things we do where we don’t have sufficient data. But recognize that you don’t have sufficient data to guide you on that, and that you are then using your gut. And because you’re using your gut, the probability that you’re wrong is higher than it is when you have quantitative data.
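A quick sketch of how a 55% hit rate can be statistically significant yet clinically weak, using a simple one-sample test of a proportion; the pooled sample sizes here are hypothetical, not drawn from the studies referenced above.

```python
from math import erf, sqrt

def normal_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

chance, observed = 0.50, 0.55          # coin flip vs. reported hit rate
for n in (50, 200, 400, 1000):         # hypothetical pooled sample sizes
    se = sqrt(chance * (1 - chance) / n)
    z = (observed - chance) / se       # z test of a proportion
    p = 2 * (1 - normal_cdf(z))        # two-sided p-value
    print(f"n={n:5d}  z={z:.2f}  p={p:.3f}")

# The test crosses p < .05 around n ~ 385: significant because of the
# power of the studies, yet the clinician is still wrong ~45% of the time.
```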

Recognize that when you talk about or think about your own level of confidence in your decision-making, because I’ll bet you that Stephanie points out that it’s the clinicians who [01:02:00] express the most extreme degrees of confidence in their decision-making who are the least likely to be right. That’s just how it works. We see that in forensic settings. We see it in routine clinical settings.

The people who tell us they are the most confident that they’re right are the ones who are most likely to be wrong. And that has to do with those common biases, particularly things like premature closure of diagnosis: you make up your mind in the first few minutes, you engage in premature closure, and then you immediately experience confirmation bias on top of that, and all the other things that go with it. It closes us off in terms of our ability to remain flexible and change our minds in the face of data that says, nope, that’s not it. So I would always keep that in mind, and that’s true of effort tests.

Dr. Sharp: Yes. 

Dr. Reynolds: Kids can malinger, some of them are good at it, and the best way to detect them is quantitatively. But as we talked about earlier, it’s also important to remember that in routine clinical exams, the major reason they fail probably is not malingering. It probably has to do with the context of the exam, our interaction with the examinee, and what’s happening right there. And we can cure that and do better exams by having quantitative looks, periodically, at the child’s effort.

[01:04:00] Dr. Sharp: I like that. I just really want to reinforce this whole theme that you’ve been touching on through the interview that it really does help us make better decisions. It makes us better examiners. The data’s important, right? We want to make the best clinical decision possible. And even beyond that, it’s just nice to know if we’ve lost touch with our kids and we need to reconnect and re-engage them.

Dr. Reynolds: That’s right. It really does. Since I’ve started using them routinely, at least I’ve convinced myself that it makes me a better examiner.

Dr. Sharp: Yeah. Well, your radar detector analogy is a really good one. I’m going to keep that in my mind for a long time, I think. It’s like a mindfulness bell in the automobile or something, an alarm that goes off.

Dr. Reynolds: Some people practice mindfulness. They actually ring a gong to bring them back to where they should be.

Dr. Sharp: Exactly.

Dr. Reynolds: We need a prompt.

Dr. Sharp: Yes. Well, like I said, I know that we could talk about this topic and many others for a long time but we will bring this to a close. There’ll be lots of resources in the show notes as always. But thank you so much for the time and for talking through a topic that is clearly important to you.

Dr. Reynolds: Thank you. And it was very interesting. You did a good job with the questions. So thank you.

Dr. Sharp: I appreciate it. Well, hopefully, our paths will cross again soon.

Dr. Reynolds: Yeah, they probably would.

Dr. Sharp: Take care, Cecil.

Dr. Reynolds: All right. Bye-bye.

Dr. Sharp: Thanks for listening everyone. I really appreciate it. There are some links in the show notes. You can definitely find Cecil’s measure if you would like [01:06:00] to check that out. You can find two other resources that we mentioned as well.

If you are a group practice owner or a hopeful group practice owner, I would invite you to check out The Testing Psychologist Advanced Practice Mastermind. We’re always recruiting for all levels of the mastermind, but the advanced practice group is moving along and there are a few folks interested, so it looks like that next cohort is going to be starting probably in May. So if you are an advanced practice owner, you’ve got a group going, and you’d like some support and help in growing your practice, with building that vision, managing your employees, streamlining systems, that sort of thing, I would love to chat with you. You can get more info at thetestingpsychologist.com/advanced and schedule a pre-group call.

All right, thanks again y’all. I will talk to you next time.

The information contained in this podcast and on The Testing Psychologist website is intended for informational and educational purposes only. Nothing in this podcast or on the website is intended to be a substitute for professional, psychological, psychiatric, or medical advice, diagnosis, or treatment.

Please note that no doctor-patient relationship is formed here, and similarly, no supervisory or consultative relationship is formed between the host or guests of this podcast and listeners of this podcast. If you need the qualified advice of any mental health practitioner or medical provider, please seek one in your area.[01:08:00] Similarly, if you need supervision on clinical matters, please find a supervisor with expertise that fits your needs.

