139 Transcript


[00:00:00] Dr. Sharp: Hello, everyone. Welcome to The Testing Psychologist podcast, the podcast where we talk all about the business and practice of psychological and neuropsychological assessment. I’m your host, Dr. Jeremy Sharp, licensed psychologist, group practice owner, and private practice coach.

All right, y’all, here we are back with another interview episode today. I’m happy to welcome back Dr. Jordan Wright to the podcast. If you didn’t catch Jordan back in episode 113, he was talking all about the APA guidance document on conducting psychological teleassessment, which he co-authored, but today, Jordan is here talking about another one of his specialties, which is assessment supervision. 

I know that a lot of you have asked about CEs and other resources for supervising others who are doing assessments. I think you're going to take a lot away from this podcast. Jordan, a man of many talents, has co-authored books on psychological assessment as well. He is the editor of Essentials of Psychological Assessment Supervision. He lays out his process for supervision and really outlines his assessment-based model for assessment supervision. It's fascinating.

We talked through a lot of different components of this model, including the different touchpoints that he will look to during assessment supervision and balancing that Sage on the stage style with a guide on the side style. We talked about procrastination when it comes to report writing and many other things. So there's plenty to take away from this episode. I think that y'all will enjoy it.

A little bit about Jordan for anyone who didn't catch it last time. He is a core faculty member in the Counseling Psychology Ph.D. program at New York University (NYU), where he also directs the Center for Counseling and Community Wellbeing, which is NYU's training clinic. He is the author of Conducting Psychological Assessment: A Guide for Practitioners, co-author of the 6th edition of the Handbook of Psychological Assessment, and editor of Essentials of Psychological Assessment Supervision. Among other topics, he has conducted research on the adaptation of performance-based tests to remote, online administration platforms, and he recently worked, as I said, to produce APA's guidance document on psychological tele-assessment with some other colleagues. So Jordan has a wealth of knowledge to share with us. I think this is a good one. So stick around.

Now, if you are in the advanced stages of your practice, or maybe just past the beginning stages, you might want to check out the Advanced Practice Mastermind group that is starting in September. This is a group coaching experience. I am the facilitator. You'll be in a group with no more than 6 psychologists who are moving past that beginning stage and looking at the more complex dynamics of running a testing practice. We talk a lot about hiring, streamlining your systems, getting more efficient, multiple streams of income, and so on and so forth: all those issues that really come up when you get past that beginning stage.

The mastermind is really powerful. It's going into, I think, the 3rd or 4th cohort now since I got these groups started a few years ago, and they're pretty amazing. So if you are interested in a group coaching experience where you get some accountability from other practitioners to move your practice forward, I would love to talk with you. You can go to thetestingpsychologist.com/advanced and get more information. You can also book a pre-group call there where we can talk about whether it would be a good fit or not.

Okay. Let’s get to my conversation with Dr. [00:04:00] Jordan Wright all about assessment supervision.

Hey, welcome back to The Testing Psychologist podcast. Today, I have Dr. Jordan Wright again. Jordan, you are joining an exclusive club of repeat guests. I’m glad to have you back. Welcome.

Dr. Jordan: Thanks for having me. Happy to be here.

Dr. Sharp: Yeah. We’re going to talk all about assessment supervision which is a hot topic in our Facebook group that comes up a lot and at the same time, there’s not a whole lot of info out there. I definitely didn’t get any class on assessment supervision in grad school. So, I’m really thankful to be able to chat with you. You’ve written a book. That’s your thing these days- one of your things. 

Dr. Jordan: Thanks.

Dr. Sharp: Let’s dive into it. You, in your book, have really come up with an assessment-based model of assessment supervision. I wonder if we might just dive in and start from the top? Maybe just talk about why you wrote this book and then jump into the actual model.

Dr. Jordan: Sure. It seems like a no-brainer. We should be doing assessment when we’re supervising assessment, right? I decided to do this book mostly because the literature is very thin when it comes to assessment supervision. There are a few models out there. There’s not much.

When I was going to the Society for Personality Assessment conferences, we were talking a lot about assessment supervision. We were talking a lot about skills and competency in assessment supervision, but nobody had a lot of [00:06:00] models. We had funded a small research study to survey and scour the literature and look at people's experiences in assessment supervision, but there's really just not much out there. There are a few developmental models of assessment supervision, but none of them has really been scrutinized empirically. People haven't done much with these.

So I pitched it to Wiley, the publisher. Specifically, I wanted to edit this book. I didn't want to write it necessarily. I wanted some of the experts in the field to contribute chapters. I wanted Steven Finn to write about therapeutic assessment supervision. I wanted Bornstein to talk about personality assessment supervision. I contributed, I think, 4 or 5 chapters to it, including the opening model, the framework, but that's really what got me into it. I wanted to pull together the smartest people in the field right now when it comes to supervision, those master supervisors.

One of the sessions that we had slated for this year’s SPA conference, which of course did not happen, was an actual experiential exercise in supervision. We had a student who was going to come in with data and be supervised by two different master supervisors in front of the audience. We had recruited two supervisors and they were blind to each other. So they were going to leave the room when the other one was supervising. And we were going to see how divergent they were, and what overlapped with the two of them in actual practice, not just theory, but we wanted to see it in action. I’m hoping that we can revisit that in the future when we get to actually have a conference again for SPA. 

Dr. Sharp: That’s incredibly valuable. That’s such a great idea.

Dr. Jordan: Yeah. I’m always a big fan of this fishbowl [00:08:00] experiential let’s see it in action type of workshops. So, I was slated to be the discussing. So I was going to watch and pull together what I saw overlapped, what agreed with the models that are out there of assessment supervision, where they diverged, and where we may need to continue this work.

I think the main reason I wrote this book is because it's a starting point more than anything. It's not definitive. It's an Essentials book, so it's very user-friendly, but it's not definitive at all. We really don't have that much to go on in terms of empirical support of what works and what doesn't. So that was the basic premise of the book.

What I did with the first chapter was really spell out an assessment-based model of assessment supervision. I can't say that three times fast, right? So, using our own assessment of a supervisee throughout the assessment process, throughout the process of the student or the colleague doing an assessment with an actual client, we need to be constantly assessing what is going on with them.

Part of it also came from a debate in the literature in, of all places, online education. There's a fairly old debate, it started in online education and has blown up from there, about what is more effective in education: being a "Sage on the stage" or a guide on the side. Pretty typically, when we're doing clinical supervision, we're a guide on the side, right? We're not lecturing. We're not teaching directly. We're not imparting our wisdom all that much. So we pepper it in there. But we're mostly guiding them. We want our [00:10:00] supervisees to find their own voice as a therapist. So we are more a guide on the side than a Sage on the stage.

I think in assessment supervision, we really need to recalibrate this. There are moments in assessment supervision where we absolutely need to be the Sage on the stage. We absolutely need to be didactic, lecturey, teachy, preachy, whatever it is we need to be. We have to be, because in assessment, and I'm preaching to the choir here, things are less reparable, right?

If I make a mistake in an assessment, it's much harder to repair than in an ongoing counseling relationship. We can fix a rupture in a therapeutic alliance when I'm doing ongoing counseling. But if I give a wildly inaccurate WISC or WAIS or something like that, it's really hard to go back and fix it. So, there are moments in assessment supervision where I, as a supervisor, need to be much more directive, much more pointed in my advice, or even step in and take over in that moment. So I need to be that Sage on the stage.

At the same time, we do still want people to find their own voice as an assessor. We still want them to be able to give feedback in their own empathic way, to do their clinical interviewing in a way that fits their personality. They certainly don't want to talk like me if they don't naturally talk like me. They don't want to say the things I would. I tend to be not super therapisty when I'm interviewing, but if they want to rely on more professionalism, we want to encourage that and help them find their own voice. So I need to be a guide on the side some of the time, and some of the time I need to be more of a Sage on the stage. And that's where this model came from.

This model is all about taking every point throughout an assessment, assessing where the student or supervisee is in terms of their competence, and then making a conscious decision: am I going to let them [00:12:00] run with it and support them and maybe mold them in a certain direction? Or am I going to stop them and teach them? Am I going to stop them, show them how to do something, maybe practice with them, demonstrate how to give a test? Am I going to really stop the process and be that more Sage on the stage? We need to be doing this throughout every aspect of assessment.

When we look across all of the competency documents in assessment, and I did this recently for a paper, I surveyed all the major competency documents. A lot of organizations put out competency documents: APA accreditation has one, NCSPP has one, a lot of these organizations, and then people have written and published their own competency documents, especially when it comes to assessment.

I tried to synthesize and pull out what is common across those competency documents and found seven different components that are really overarching. Let's see if I can remember all of these. They start with foundational knowledge. This includes psychometrics, ethics, and legal and professional issues, that kind of stuff. All of those. Theory as well. We need to really understand the theory behind cognition. If we're giving an IQ test, we want to know: is it a CHC model? Is it a CGCS model? We need to know the theory behind it.

The second area is diversity and context. And this is an area that is sorely lacking in a lot of the literature. We're starting to get more written on diversity-sensitive assessment. There's a good book on diversity-sensitive personality assessment that just came out. There was a great book by Joni Mihura and Rebecca Krishnamurthy, I believe, on assessing gender [00:14:00] minority women, LGBT individuals. More and more is coming out around that, but I think our instruments are not as good at that. And I think a lot of my supervisees are not as consciously aware, going in, of needing to address diversity and context and culture at every point in the process. I find myself needing to point them to resources. That's more Sage on the stage, right? I'm needing to point them to resources more often around that competency.

The third one is relationship, just building relationships. On this one, I will say, I tend to be more guide on the side. A lot of the supervisees that I work with, all the way from my first-year students that I'm training in assessment, through prac students, through colleagues that I supervise or consult with, they've got this down: how to build a relationship, how to build a therapeutic alliance. We know that you're going to get more valid interview data if you have a positive relationship. And I think people are pretty good at that.

Then we get to the testing stuff. We get to interviews as a primary methodology. Interviews, spanning from completely unstructured interviews to semi-structured and fully structured SCID- or DIAMOND-type interviews, are a foundational skill when it comes to psychological assessment. A lot of the research that has looked at how many psychologists are doing assessments, I think, overestimates the amount of testing that's out there, because they ask, how much assessment are you doing, are you doing assessment? And people say yes, because they do clinical interviews and they make decisions based on that, which is a little different than what we're talking about with more comprehensive assessments.

Dr. Sharp: Of course.

Dr. Jordan: Then we get to the actual skills of selecting tests and then [00:16:00] giving tests and interpreting tests. And again, all of these things rely heavily on that foundational knowledge, the diversity knowledge, the understanding of context. And this is often, obviously, where we need to be a little bit more directive, I think.

When I do supervision with colleagues, when they come to me and ask me to supervise a couple of cases just to learn my models or whatever, they're often very wedded to certain instruments. We get wedded to these instruments. We become militant about some of these instruments. We then tend to, I think, overestimate our understanding of these instruments. The more and more we use them, we feel like, oh, I'm more comfortable with it. I know how to administer it. Great.

I totally get it: I understand what this instrument is entirely saying. But actually, I think the more and more we use it, the further we get away from the actual literature base, and the more we go into reinforcing our own habits, some of which are probably really great and some of which may not be so great, or relying entirely on a publisher's reports.

I often think about, for example, the MCMI. I really like the MCMI. I use the MCMI when it comes to understanding personality and especially personality pathology. But when I read those reports, they are so full of psychoanalytic jargon. If I think about somebody who's puerile or has intra-psychic architecture of a certain type or that kind of stuff, I feel like I get it, but I actually need to go back and reread exactly what those mean each time I give it. More and more, I have my students do that. When they're giving an MCMI, I'm like, go to the Essentials book, go to the Handbook of Psychological Assessment and look up exactly [00:18:00] what this means. Let's not assume we remember or understand.

Dr. Sharp: Sure. If I can interject a little bit, I don't know, have you read that book Range?

Dr. Jordan: No. 

Dr. Sharp: It’s called Range. It’s really interesting that speaks to, I think exactly what you’re getting at. His whole argument is that the expert in model is flawed in the sense that folks who have a range of experience in different expertise, are actually more competent in their fields. And the longer someone sticks within their lane, the worst they get at what they do. So, I think it dovetails well with what you’re saying. And it has been born out in research and in a number of fields.

Dr. Jordan: Yeah. And I think it's an interesting point when it comes to the intergenerational transmission of assessment knowledge as well, because our assessment professors are teaching how they learned. A lot of them are experts in a certain type of assessment. And I think a lot of programs are teaching somewhat dated models of assessment, which I'm going to come back to.

Dr. Sharp: Okay. I've got to say, it was terrifying for me the first time that I disagreed with my advisor's philosophy. When I went out into private practice, I was like, wait a second, I don't think this is right. Can this not be right? And I had to go through this whole process of finding my voice or whatever.

Dr. Jordan: It rocks your world, right?

Dr. Sharp: Yeah.

Dr. Jordan: Is anything I learned right? I'm questioning everything now.

Really quickly, the final competency is communication: report writing, feedback, and that kind of stuff. And this is again where we need to assess: how was this supervisee trained, how comfortable are they, how competent are they at writing? [00:20:00] This is an area where it is more repairable, right?

We can do drafts. It's not unusual for me to go through 10-15 drafts of a report, especially the first couple of times they're doing an assessment with me, because I will point out stuff. I'm not going to correct it for them. I'm actually going to put comments in and make them fix it so that they learn it. It's more that guide on the side. I'm going to take what they're… I don't necessarily need it written the way I would write it, but I do need them to take the jargon out. I do need them to make it comprehensible to a lay reader. I do need certain things in a report, and I'll point out those moments where it's not there, and it may take them 2 or 3 tries to edit the language to get it to a point where it is entirely clear, comprehensible, and all that kind of stuff.

Dr. Sharp: Sure.

Dr. Jordan: The one thing I do want to point out, and this is really mean, and I apologize to everybody out there, is that whenever I'm supervising somebody, I make the assumption that they're terrible. I start out with the assumption that you're not good at assessment, no matter what level you are. Even if you are a licensed professional who has been doing assessments, if I'm supervising an assessment, the first time we do it together, I'm probably going to be a bit more directive, probably going to make that assumption early on that I need to scrutinize what you're doing so that I know. And this gets back to the assessment-based model.

I’m going to watch. I am going to look. The first time you are giving a WISC, I’m going to ask for a videotape of you giving that WISC. Record that zoom or whatever. Whatever you’re working on, I’m going to ask to see it. I’m going to double score protocols. And that seems really [00:22:00] basic for advanced clinicians, right?

When somebody comes to me and asks for supervision on a testing case, I'm not going to make any assumptions about their skill. I am going to assess it. So I do need to talk to them about what their training looked like. I do ask for sample reports just to see what they've done before. This is a way of assessing how competent they are and what their style is. I want to know what their personal style is, or, more often, what their previous supervisor's personal style is, so that I can make a determination: how much do I need to jump in here and intervene, and how much can I just let them go, let them find their way, and really support them and positively reinforce all the good stuff, and gently guide, a little bit differently, the stuff that I think needs a bit of intervention.

Dr. Sharp: Yeah. I have two things I wanted to touch on in all that you said. First, a super detailed question that you may or may not know the answer to. If people are videotaping sessions for supervision, how are you doing that? Do you know what system you're using that is HIPAA compliant, where you can actually access the videos and so forth?

Dr. Jordan: Oh, so if they’re videotaping their sessions for our supervision?

Dr. Sharp: Uh-huh.

Dr. Jordan: Again, I’m a little spoiled because at my university we have a HIPAA compliant video system built into every single room and we videotape every single interaction. Typically, if I’m supervising a colleague or something, I will ask them to come in and do the actual assessment in our offices for that first one, just so for that purpose, unless they have access to some sort of HIPAA compliant recording mechanism. Ours is HIPAA and FERPA compliant. It’s [00:24:00] one drive or box through my university so we can communicate that way. Even if they have to record an external recording and save it to the box, they can transfer it to me in that way. 

Dr. Sharp: Got you. I know a lot of folks have asked about systems for doing that. The second thing, though, I wanted to say is, I don't want to be mean and assume everybody's terrible, but I don't know, I've found in my experience that the majority of folks, even if they may not say it, want that feedback. There's that internal sense of, am I really doing this right? To just take the leap and get some concrete feedback, I think, is helpful for a lot of people, and relieving, honestly, because then it's like, okay, we got this out of the way, now let's get better at it, right? I don't know if you've found that at all.

Dr. Jordan: Yeah. I think if we go into it with a collaborative, growth-mindset-oriented perspective, absolutely. And anyone who's coming to me, especially colleagues who are coming to me for supervision, obviously, they're open to feedback, or, I'd say, they wouldn't be coming to me for a consultation or supervision.

There are, in my however many years of doing this, a handful of students, usually early students, who are a little bit more resistant or defensive or whatever. What I tend to do is point them to some of the research about how bad we are at scoring these.

There is study after study after study where credentialed, certified school psychologists, licensed clinical psychologists, and counseling psychologists make mistakes in their WISCs, and how many mistakes are made is shocking. So I point them to that. I say, look, this is why I'm scrutinizing it. It's not because you yourself are bad at this. It's because we are all collectively not careful enough, and a second pair of eyes only serves to benefit the [00:26:00] client in the end.

Dr. Sharp: Yeah, I like that, kind of taking it off them. It's not personal. It's just, this is human nature. This is what we do.

Dr. Jordan: Yeah. Sometimes it’s personal.

Dr. Sharp: That’s right. Nice. Let me see. I did want to ask before we maybe transition to the actual model, you mentioned that you’ve surveyed and compiled all these different competency documents. Is that paper of yours out yet or is there one to rule them all that we could look at to pull all of this together?

Dr. Jordan: It’s submitted for publication. We’ll see.

Dr. Sharp: Okay. Fingers crossed.

Dr. Jordan: We’ll see if it gets accepted. I revised and resubmitted it. Hopefully, it will come out. It’s actually a paper building a model for master’s level competency in assessment. So now that APA has agreed to accredit master’s programs in psychology, we, as a field need to be very clear what are the competencies, especially in assessments that are expected at the master’s level as compared to the doctoral level. That’s why I pulled all the doctoral ones together to make a very clear argument that this is the set of doctoral-level competencies as compared to what I proposed should be the master’s level competencies.

Dr. Sharp: Okay. I’m going to fight hard to resist yours to go into that. 

Dr. Jordan: That’s off topic. 

Dr. Sharp: You’re right. My gosh. That’s a whole other. So we’ll put that away, but do you want to transition to the model? 

Dr. Jordan: Yeah. The model is basically what I presented. What I do when I supervise is what I call touchpoint supervision in assessment. What I mean by that is there are critical moments throughout the lifespan of a single assessment where a supervisee [00:28:00] needs to touch base with me. So that may not mean weekly supervision. It may not mean an hour a week of this type of supervision. It may mean we front-load supervision and do 3 or 4 hours in the first week, and then wait two weeks while they're collecting data and then come back again.

I always do a supervision session when someone gets a referral. I'm going to be talking now specifically about students that I supervise. They get the referral, we meet, and we strategize. We strategize how to reach out. And at every point, at all of these touchpoints, I'm assessing. I'm assessing: how comfortable are you? I often use a very Socratic method and say, okay, let's think aloud. You start. I put them on the spot.

One of my supervisees today, we had group testing supervision, and she said, I've been studying my data because I was afraid you were going to call on me today. I was like, don't be afraid. Talk it out. Let's talk it out. So, I let them lead. Our first touchpoint is before they've even contacted the client; then they contact the client and set up a first…

I have a model where we use a semi-structured clinical interview. We always start with the same semi-structured clinical interview. It's one that I've developed and published in my first book. It includes a lot of context. I believe that the clinical interview is meant to get the presenting problem, the history of the presenting problem, and a whole bunch of crap. A whole bunch of context. So I am going to ask really in-depth about developmental history, about medical history, about family stuff. I'm going to ask a lot about culture and diversity. With a colleague of mine, I've developed a structured cultural [00:30:00] interview that we now build into the clinical interview for assessment so that we get a more robust cultural context.

Dr. Sharp: I wanted to ask, when you say you're asking a lot about culture and context, can you give a few examples of what that looks like? When you say a lot, what are you diving into?

Dr. Jordan: Yeah, sure. This structured interview is built around Pamela Hays's ADDRESSING framework. This is a very common framework, especially in counseling psychology, for understanding someone from a cultural perspective. ADDRESSING is an acronym that I will not remember all of, but it goes like: age and generational issues and how they've affected you; disability status, both acquired disabilities and disabilities you were born with, genetic disabilities; race; ethnicity; gender; gender identity; sexual orientation. All of these things are built into it. And we go through very methodically. I think a lot of clinical interviews, especially the more unstructured ones, may miss a few. We all have blind spots. It's why I like structured interviews. They cover our blind spots. So this is the first touchpoint.

So they have now interviewed, and they come back to me for supervision. And at that point during supervision, again, I am assessing them. I am seeing how comfortable they are taking the data that they got from the clinical interview, as well as a mental status evaluation and their behavioral observations, and putting that all together to come up with hypotheses of what could be going on for this person.

So for example, someone comes in and is like, I have bad attention. Okay. Our easiest hypothesis is you have ADHD, but we also know attention problems are secondary to pretty much everything [00:32:00] else, right? It's actually listed as a symptom in depression, in anxiety. If you're working with kids, if their parents are getting divorced, if it's situational, if it's contextual, psychosis, everything gets in the way of attention.

So if someone comes in and that’s all you know about them, I have problems with my attention, you should have a lot of hypotheses about what could be going on for that person. And so, I am assessing, and I’m figuring it out as I go, how comfortable is this supervisee coming up with this litany of hypothesis of what could be going on and it’s okay. No matter where they are on the continuum, I’m going to meet them there and decide, am I more on the guide on the side, because they’re doing a really good job coming up with all these hypotheses or if they’re like, I really am not that comfortable with attention. So I’m not really sure. Then I step up and become more of a Sage on the stage. And I either point them to resources. I try not to lecture them, but I probably lecture them. You can tell I’m a talker.

So, this is a touchpoint where I'm assessing how strong they are at coming up with these hypotheses and basically teaching them how to do it. If they're not very good at it, I'm going to do it in real time with them to model it, to show them: okay, now you have the presenting problem, its history, a lot of context; here's how we come up with what this might be, what might be going on.

And we can tie this to a theoretical orientation. We can tie it to the DSM. I don't love tying it to the DSM, but we can tie it to the DSM. We can tie it anywhere we want based on who we are and what our identity is as an assessor, but in general, the skill is the same: coming up with hypotheses that then inform the selection of tests.

Selection of tests is one of those competencies that appears across all of these competency documents, and selection of tests [00:34:00] is something we sometimes take for granted. A lot of us use a standard battery of tests: oh, you have attention problems, I'm going to give you these five tests and they'll tell me whether or not you have ADHD. Okay. But if the answer is no, you don't have ADHD, I'm probably going to want to add some more tests to find out what you do have, what is causing the attention problem.

So again, I am assessing the supervisee in the moment: what tests are you familiar with? What do you know about these tests? What do they do? What do they do well? What do they not do well? If they don't do that well, do we have another test that can cover it? Do they work with this specific population? And I will point them to resources.

I was very, very careful, for example, in the Handbook of Psychological Assessment: every single chapter on a test has a section on the use of that test with diverse populations. So I point them to that. I'm like, you need to go and do the research. Now, that book is old. The 6th edition is 2016, so it is somewhat dated. So sometimes I point them to PsycINFO or Google Scholar. I'm like, you go do some research and come back and tell me: with this specific cultural background, is this test effective? Does it have some caveats? Will we need to interpret it a little differently later on, or do we need to not give it at all and choose an alternative test? So this is…

Dr. Sharp: Can I jump…

Dr. Jordan: All right. 

Dr. Sharp: Yeah, I just wanted to… I had two questions about this assessment piece. This is a core component of the model, right? So are you literally assessing them with a measure of some sort, or is this a qualitative gut impression? And then the second part of that is, do you communicate the results of your assessment to them in the moment, or is that happening at the end of the semester? How does that work?

Dr. Jordan: Yeah. So, the ultimate [00:36:00] goal would be to build a little tool for each of these touchpoints. I would love to do that. I don't do that yet. What I try to do is be mixed methods about it. So I look for any objective data along with my own clinical impression of where they are.

I’m very transparent about this model. I tell them upfront that I’m going to be doing this all along, especially when I supervise my students who I have for a year and they’re doing 15, 20, 25 assessments, it’s great for them to see where they’ve grown. It’s great for them to hear, oh, now you select the tests. You tell me. You’re better at this now. But right now it’s more informal. I am doing it more informally.

So I’m trying to look for concrete data. How many tests can you come up with? That’s a concrete measure. If we have three hypotheses, what are the tests that can rule out? Can you figure out tests that can rule out each of these three hypotheses or only two hypotheses or only one of them that you know, tests for? So that’s an objective measure. And then, I obviously look at their confidence level and I ask them how comfortable are you? And I get their self-report around it. I try not to take too much time with this because we have to obviously get to the content of supervision and not just the process, but  I try to be deliberate about it at each touch point. 

Dr. Sharp: Okay, great. Thanks for that. I like to get very concrete with this. 

Dr. Jordan: Yeah. My next big goal in my career is to come up with, you know, they don't even need to be sophisticated, measures of competency in these areas. It could just be a little rating form that pulls together one piece of objective data, one piece of clinical impression, and then guides the supervisor on how directive to be at that moment.

Dr. Sharp: Yeah. You just got me [00:38:00] thinking. This is on my mind because we're just wrapping up a year of APA internship. We have an intern here, and we have to do those final evaluations. It's a Likert scale on the APA competencies and whatnot, but it's really hard. The conversation was like, well, yeah, I think a 4, a 5, I don't know. It's hard quantifying this stuff, or operationalizing it as much as we can.

Dr. Jordan: Yeah. I find those summative evaluations a little bit tough because the benchmark is, are they ready for independent practice? And I find so many licensed psychologists are not ready to test independently. So do I err on the side of honesty and be like, well, no, but that's okay because most people aren't? Or do I pad it a little bit just because they're on par with everybody else who's graduating, who's also not yet ready to practice independently?

Dr. Sharp: Right. That’s such a good question. So we were talking about test selection I think that being a touchpoint. Where does it go from there?

Dr. Jordan: From there, it really goes to the testing. And this is probably where I see the most variability in supervision, because it depends on how familiar they are with specific tests. If we decide on a test and they don't know how to give it, I need to be on that stage teaching them how to do it. I need to maybe give two extra hours of supervision early on. That's what I mean by padding it early on, because I want to make sure that when they get into that room with a client, they are fully competent to give a measure. If not fully comfortable, I want them to at least be competent to give that measure.

There are some exceptions. In my training program, for example, I train my students on the ABAS, and I often try to give them cases early [00:40:00] on where the ABAS is not super important, where nothing diagnostically is going to hinge on the ABAS and I have plenty of other data, because I expect them, for a year or two, not to give a fully valid ABAS. It's just so hard.

Same with the ADOS. I won't take a student who has not trained on the ADOS for a year and get them ready to give an ADOS for real. I just don't think it's realistic. They need so much training, so much practice. And when you're giving an ADOS, it hinges on the ADOS, right? Diagnosis often hinges on that. So, if there is any reasonable expectation that they're not going to give a measure validly, whatever the measure is, I'm okay with it as long as nothing's going to hinge on it.

Sometimes, I’ll add an extra measure just for training purposes, even though it’s completely unnecessary for an assessment. Again, I’m part of a training program. I have the flexibility and freedom to do that, or little feet knowing no one’s paying extra for that. So, I like to train them in that way and then I get those videos and I get to watch them then.

So again, this is probably the area where we as supervisors are most familiar with assessing our students: administration. Can you administer and score this accurately? We're usually pretty good at saying, oh, you don't know how to give Block Design, or, oh, you're doing that all wrong, you forgot to reverse. We're pretty good at that. And I think we need to extrapolate that skill out to the rest of the assessment process. So then they go and they test.

Different students will take a different approach. If they're spreading it out, [00:42:00] then I may ask them to come in and meet with me to double score protocols. I try to do that with them, especially at the beginning of their training. I go through it while they're sitting there. I talk out loud about my thought process as I'm going through: how I check for mistakes, how I add all of the ones, but then also add all the zeros and subtract that from the total possible, these ways to make sure that they don't make a mistake in calculation. Because they don't do this. Supervisees, students, don't do this.
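[Editor's aside: to make that cross-check concrete, here is a minimal sketch in Python. It is not from the episode; the item scores are hypothetical, and it assumes simple pass/fail (0/1) item scoring, whereas some subtests award 0, 1, or 2 points per item.]

# A minimal sketch of the double-scoring cross-check described above,
# assuming pass/fail (0/1) item scoring. The item scores are hypothetical.
item_scores = [1, 1, 0, 1, 0, 1, 1, 1, 0, 1]  # one scored subtest protocol

# First pass: add all of the ones.
raw_score = sum(s for s in item_scores if s == 1)

# Second pass: add all the zeros and subtract from the total possible.
zero_count = sum(1 for s in item_scores if s == 0)
cross_check = len(item_scores) - zero_count

# The two totals should agree; if they don't, re-score the protocol
# item by item before converting the raw score to a scaled score.
if raw_score != cross_check:
    raise ValueError("Totals disagree: re-check the protocol")
print(raw_score)  # 7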

So if they’re doing it spread out, if they’re not, sometimes they like to knock it out in like one Saturday and they’ll just do all the testing. And then that makes an easy next touchpoint because they come in, they meet with me, and again, I get them for a year. So I want to see them get better and better and better with the administration, especially with the coding and scoring of these tests as the training year goes on. I don’t expect much the first time. I expect every WISC or WAIS or Woodcock-Johnson to have at least an error that puts them out of whack.

It’s amazing. Student supervisees will come in with a protocol that’s fully scored and in the IDD range, in the intellectual developmental disability range because they forgot to add all these ones before basal. They just forgot to add all those.

Dr. Sharp: So many times have I seen that. Yeah. 

Dr. Jordan: So you need to do it. You need to scrutinize it. I do not want to diagnose a kid as IDD because of that mistake. Can you imagine? That’d be hard. 

Dr. Sharp: Yeah, that’s a nightmare, Jeez, but it just goes to show those things. I think the longer that we do it, the more we assume that others know how to do it. That’s just a foregone conclusion that, of course, you add starting from the [00:44:00] beginning, but no.

Dr. Jordan: Yeah. I want to come back to that point, the idea that we make these assumptions that people are doing this, that our supervisees just get it. I want to come back to that when we talk about using some guidelines documents in supervision.

Dr. Sharp: Okay.

Dr. Jordan: So then, the next touchpoint is the interpretation of tests. Now we get into the standard stuff. I want to see that they can interpret each test accurately. I want to see that the data are valid, that they're interpreting them in a valid way, in a way that's aligned with the literature. I tend to be more on an evidence-based assessment model. We're not doing a lot of projectives. I'm not saying there's not value in them, but there's not a strong evidence base behind most projectives, so we just don't use them.

So I want to know that they are interpreting tests in alignment with the literature, not just with the clinical manuals. I want to know that they're incorporating diversity. I want to know that they understand what variations mean. I want to know that they understand the difference between a full-scale IQ, how robust and predictive that is, versus all the index scores and some problems with those. I want to know this stuff. I want to know where they are.

So I let them do it in front of me, and I let them fall. I let them falter. I let them make mistakes. And I assess. I go in there and I try to figure out: oh, you've been taught this, but I'm actually going to train you in a slightly different way. This is where you go: sorry, I'm going to fix this in a way that I think is more empirically based or evidence-based.

So those touchpoints around the interpretation of tests obviously cover one of our biggest [00:46:00] competencies in assessment. Administration, coding, scoring, and interpreting, I think, are where our doctoral programs especially do a pretty good job. They tend to focus on that. Then we get into the hairier parts, which are integrating data, reconciling discrepancies in data when two different measures or methods or informants say different things, and I think programs don't do a good job of training in that, and then conceptualizing, which is taking all of the integrated data and tying it to a psychological theory.

This is, again, an area where I think doctoral students get less training. I think they're expected to pick it up intuitively or get it through supervision somehow. So I am very deliberate about it. Again, I'm going to teach you the model. I'm going to let you try it. I am going to correct it in front of you. I'm going to say, this was amazing, this was an amazing first stab at it. Here's how I might categorize these, though. Here are some themes that are a little bit different from how I think you're thinking about them. Or, because I know this test a little bit better than you: although it's called this, although this index is called this, it's actually a little bit different than that.

I was just griping about, and I will apologize to Susie later, I was just griping about Block Design being called a visual-spatial test when it's so much more than that. It's not that it's a bad test; for estimating that, it's great. But to call it visual-spatial when someone can tank it because of slow processing speed, psychomotor speed, psychomotor integration problems... there are a host of skills that go into it. And so many supervisors just come in and say, oh, that's visual-spatial.

So we need to help them along. I need to assess: what do they know, or what do they think they know? [00:48:00] How do they think about these tests? How do they think about the data? And then, do I need to guide them a little bit, or do I need to actually stop them and reteach them certain things about certain tests or measures or data that emerge?

Dr. Sharp: Yeah. This is such a hard place for me. I wonder, do you have any systematic means of doing this? I'm glad to see you nodding. Do you have any systematic means? Because it seems like so much of it is just, well, it depends, let's look at the context, and the context is literally different for every single client we take. So it ends up being this lengthy, very nuanced discussion where I never feel like I really cover everything.

Dr. Jordan: Yeah. And when it’s so nuanced, we end up with reports that say things like, “People with this profile often…” It’s my biggest pet peeve. I do not want to see that. I want to see about this person. Is that true for this person?

My first book, Conducting Psychological Assessment, the 2nd edition of which is coming out this November, spells out a very concrete, explicit model for doing assessment, for assessment supervision, for training in assessment, and all that kind of stuff. It makes the process almost rigid, almost concrete, in the way you think about each piece of data and where it fits, because we know the content of assessments gets overwhelming.

I don’t want the process to be overwhelming as well. So we make the process of it pretty systematic and it’s tedious, but it ends up doing two things: One is we make fewer assumptions. We all have biases and we cover our, it’s sort of a CYA model, [00:50:00] Cover our ass model. It covers us for any of these blind spots that we have or any data that we don’t like or don’t quite fit. So we just ignore them. It fixes that.

Also, I find that, especially forensically, when I use this model, it makes it so much easier to testify, to justify, especially when there's discrepant data. When one measure says this and three measures say that, I've got it in a very clear, specific method and outline, and I can go back to that and find out exactly why I made that clinical decision, why I decided to throw that piece of data out as error or reconcile it in a certain way. I don't have to go through my beautifully written narrative report. I can go back to the step before that in the model. That is spelled out in the Conducting Psychological Assessment book. The 2nd edition is out in November.

Dr. Sharp: Nice. I’ll put that in the show notes. That’s super exciting because I feel like that’s a big one. How many times have I gotten the question, have you gotten the question, the rating scales say that he has ADHD, but I didn’t see any cognitive results pointing in that direction or some variation thereof?

Dr. Jordan:  Absolutely.

Dr. Sharp: Mom reports.

Dr. Jordan: Yeah. Mom reports this, teacher reports this, what do I do? Now there's a model for reconciling conflicting data or discrepant data. Get methodical about it. Get rigid about it.

Dr. Sharp: Great.

Dr. Jordan: So then it comes to the writing process. In the writing process, I suspect I do similarly to most people. I ask for a complete first draft from students, and the first time they send me that first draft, I meet with them. It's another touchpoint because it is an assessment.

So I meet with them and I ask them, what was it like to write this? How hard was it? There's a whole chapter in the Essentials of Psychological Assessment Supervision [00:52:00] that Hadas Pade and I wrote on supervising report writing, and we dedicated an entire section to procrastination and how to deal with it. I highly recommend that, because procrastination is probably the biggest problem. Writing can be tedious.

So I do both objective and subjective assessments. I look at the report draft. I ask them about the process: what they thought about it, what they felt confident in, what they didn't feel super confident in. And then I send them away. I don't edit in front of them, necessarily. Then I put in a whole bunch of comments; in Word, you can add comment bubbles and stuff like that.

There are little things that I might correct, like verb tense. I'm big on verb tense. I don't like "The client reports this." That's an ongoing present tense. They reported it. That's in the past. That's happened. They might've reported that they are something, which is present tense, or they might've reported that they were something, which is just past. The other big pet peeve is "Client said this." I hate it. Either write "the client" or write their name. I get that we do that in progress notes, but do not do it in a report, or I will throw you out the window.

So, then the report goes back and forth. We share it through Box, back and forth. Again, it can be a lot of drafts, and I basically give them an invitation. I'm like, if and when you feel you need to meet, let me know. So it's an optional touchpoint. Early on, most of them do want to meet, at least to talk about diagnosis and recommendations. Later on, they might give it a try, and then I'll put in some comments like, oh, have you thought about this diagnosis? Or, have you read this article about why this differential diagnosis may not be quite so accurate, or whatever? But in general, we meet if and when they feel, [00:54:00] in the writing process, that they want to meet with me. I try to be a resource for them at that point.

And then once the report is done and signed, we do two more things, actually, three more things. One is, I have gotten into the habit now of doing a feedback presentation. I have my students actually create a PowerPoint to guide the feedback session that summarizes what the questions were and some behavioral observations, and then takes you through the findings and the results so that we don't have to read through the entire report.

Parents can do that. Clients can do that. Teachers can do that later on. But I find it's actually really useful, one, for organizing the feedback session, and two, there's something a little bit protective and safer about having that outside object for them to look at, so they don't have to look you in the eye the entire time, because that can be very vulnerable. It can be very vulnerable to look somebody in the eye. It's such a weird process. Like, I'm about to tell you all about yourself, and you know yourself way better than I do.

Dr. Sharp: So true.

Dr. Jordan: Here it goes. And I have to tell you some hard stuff. It sometimes can be almost like a security blanket to have a laptop in the room with a screen, even if it's just a static screen, even if it's on that finding for 20 minutes while we're processing that finding. They can still look back at it as a safety measure.

Dr. Sharp: Yeah. So you’re doing that, just to be clear though, they create that PowerPoint presentation for feedback for the client, and they’re presenting to the client. Are you doing that literally just with a laptop in the room that the client looks at or?

Dr. Jordan: Yeah.

Dr. Sharp: Okay.

Dr. Jordan: Right now it’s a lot through zoom and we share the screen, but yeah, I bring a laptop in the room. I do this with my own clients. I bring a laptop in the room and I [00:56:00] have my PowerPoint presentation on and it shows my results. I set it up so that it animates the page so it’s one result at a time, it’s not overwhelming and so that they can see it in writing, but not have a full report. That’s super distracting.

Dr. Sharp: Oh gosh. Yeah. I got you. That’s great.

Dr. Jordan: So we meet, we discuss the feedback presentation. We also discuss a model for feedback, and I think there are a lot of considerations. In the Conducting Psychological Assessment book, I have a chapter on feedback and all the considerations we need to take into account that we sometimes don't think about. And so, I take a student or a supervisee through that part and say, okay, how do we want to frame this? Do we want to frame it with a diagnosis first, with explanation? Do we want to make it a murder mystery and give you all the data, and then at the very end, by the way, this is your diagnosis? Do we want to break it up? If they end up with a cognitive diagnosis and a personality or emotional diagnosis, do we want to break it up that way?

So we need to structure this session in the way that is most appropriate, or what we think is going to be most appropriate, for the client. Again, I'm assessing the supervisee to see how comfortable they are thinking aloud about this: thinking about the general intelligence level of the client, the general psychological mindedness, the general level of defensiveness, the stage of change. Are they firmly in that pre-contemplation stage where they're really not hearing much? Then I need to think about presenting this in a slightly different way than if they came in saying, I'm depressed, and I found they are depressed. Don't bury the lead; like, by the way, you're depressed, and here are the dynamics or the conceptualization or whatever that was uncovered. So I'm assessing the student and guiding or saging, that's not the right word, guiding or teaching, right? Being less or more [00:58:00] directive in terms of helping them structure that feedback session.

They do the feedback session, and then the final thing, and this is one area where I think a lot of testing supervisors maybe take a shortcut, is closing the loop. I'm a big believer in closing the loop, in having a supervision session after a whole testing case is done to think back about the process: to have the student think back about where they struggled and where they feel they got more competent, so that we can think about that for the next testing case they do with me; where they did not agree with my method; whether they had a problem with the way that I said something. I can be a little glib, and I can be a little flippant. Sometimes students take that the wrong way. I will apologize, and I will know for the next time to use a little less humor and a little bit more seriousness with this student.

So, I ask for feedback on me. I really want to close the loop. I really want to hear about how the feedback session went, but I also want to think about the entire process as a whole as part of my assessment. You talked about the end of internship. That is a summative assessment. It's at the end. I give you a grade. It is a meaningful grade. All of these assessments that I'm talking about are formative assessments. They are assessments meant to be given as feedback so you can grow, not to evaluate you. It's not evaluative. I'm not going to send it back to your DCT or anything like that. This is to help our process and to help you grow.

And that’s super important for them to know from the beginning all the way to the end because we were about to do another case and start this process all over again. I can take a few more shortcuts in the assessment as we go on through the year because I used the past case as objective data for this case. So I know where [01:00:00] more or less you stand in terms of your competency in these different touchpoints. 

Dr. Sharp: Sure. I like that distinction and it seems to be easier to build a relationship that way as well.

Dr. Jordan: Usually. I mean, sometimes yes, sometimes no. There are certainly students and supervisees who are very uncomfortable working in this way. And I get that. But we need to be okay with discomfort. One of my mantras in my program, with my students, all the time, is: you need to know the difference between uncomfortable and unsafe. We live in uncomfortable, right? We get paid to live in uncomfortable. That's different from unsafe. If you feel like you might decompensate, if you feel like you really aren't safe, I need to know about it, I want to hear about it. But let's not confuse the two. If you're uncomfortable, that's okay.

Dr. Sharp: Yeah. Again, a great distinction. My gosh. I feel like I'd be totally remiss, some people would shoot me and stop listening to the podcast, if I didn't go back, and this is way back, out of context, and ask if you have any quick tips, or even just one thing, to help with report-writing procrastination.

Dr. Jordan: Yeah. There are models of procrastination out there. Some of the literature on procrastination is amazing. We've summarized it in this chapter, but basically, you need to first decide if it's passive procrastination or active procrastination. There is a difference.

Passive procrastination, which we probably see a lot more often, is when things are just getting in the way: personal issues, anxieties, self-esteem. These sorts of things just get in the way of getting the work done. Active procrastinators are people who get excited and energized by doing it at the last minute, but they tend to meet their deadlines.

So if it is active procrastination, give them a deadline and break it into component parts. If it's not a full draft, [01:02:00] say, I want your background section by this date. And those active procrastinators may put it off till the night before, but they'll get it to you. With the passive procrastinators, you need to move to the next step, which is to try to figure out, to assess, what it is that's getting in the way. And that usually involves, I was going to say a mini counseling session, a conversation with the supervisee about what they think is getting in the way, what might be getting in the way, and a whole lot of validating, right?

You shouldn’t have really great self-esteem about report writing yet. I get that. I expect it to be terrible. I want it to be terrible. I want you to make all those mistakes now so that I can help you fix them, teach you so that four years from now when you’re a licensed psychologist, you don’t make those mistakes. So be okay sending me crap. Be okay.

It’s a lot of validating any insecurities, that kind of stuff, and then you come up with a plan to address that. So, if that conversation doesn’t work, then you collaborate with them as much as possible to come up with a plan. It has gotten to the point sometimes where I have had to set aside an hour and have a student come in and work on my computer. I will sit next to them and I’ll do work on my phone or whatever, and they will sit next to me and they will report write because they, for whatever reason, cannot drag themselves to do it.  If that makes sense.

Dr. Sharp: Sure. I like that. Thanks for indulging that question. Like I said, I know people are asking, how do we beat this?

Dr. Jordan: Yeah, for ourselves too, right? 

Dr. Sharp: Oh yeah, that was a thinly veiled question to help all of us too, and not just our supervisees.

Dr. Jordan: Absolutely. One thing I did want to go back to was this idea of assuming that everybody gets it, or that everybody’s doing it the way we expect them to do it. [01:04:00] One of the resources that I did want to point out, it’s in the Essentials book and it’s also in the Handbook of Psychological Assessment, is a rubric that I put together with a team of people at SPA. We called it the proficiency rubric. Technically, it’s a rubric for report writing: a rubric to look at a report and see, does it meet all of these standards?

I make my students look at their own reports, before submitting them to me, from the vantage point of this rubric. I make them rate themselves on the rubric. It’s got things ranging from the very basic, is it multi-method, are we integrating data seamlessly or is it written test by test, are we incorporating diversity in an explicit and deliberate way, are the test findings valid, all the way to writing style: is there a lot of jargon? Have we found an easier way to say this?

An example that I give very often in my classes is a client who I was giving feedback to. She had major depressive disorder, and when I told her this, she started crying and said, I knew I was depressed, I just didn’t think it was major. Until then, I hadn’t really thought about the fact that it’s complete jargon that we call it major depressive disorder. It’s just depression. That’s just what we call it. There is no minor depressive disorder; that just doesn’t exist. So have we found better ways to write that?

The rubric was designed to rate a report, but it is a proxy for the entire process. As you can hear, some of those items are really about how client-centered is this? How comprehensive is it? How valid is it? So it’s a rubric for the entire process of assessment, [01:06:00] and it helps us, again, cover our bases rather than assuming that our supervisees are doing all of these things. Like, oh, I know we talked about diversity issues, but I forgot to write it in the report, I forgot to be deliberate about that; or, I decided not to, and this is why. I make them describe that to me: why did you decide not to do this even though we talked about the fact that this test is problematic with somebody from this cultural background?

So, I want them to think through that, but I also don’t want to forget it. I’m supervising so many assessments all the time that if we meet and talk about a diversity problem, and then three weeks later you turn in a report draft, I may forget that we talked about it. So I make them go through that rubric, submit it to me with their first draft, and I go over it with them. It’s just a good tool for helping you cover your bases.

Dr. Sharp: Absolutely. Yeah, that sounds like an amazing tool. I’ve thought about something like that; I think I’ve probably piecemealed components of it informally over the years. To know that it’s out there is pretty awesome.

Dr. Jordan: We have a fillable PDF of it. We are happy to send it out to anybody. 

Dr. Sharp: Nice. I would imagine people will be very interested in that. Fantastic. Gosh, what else is there with this? I feel like this has been pretty comprehensive. Are there any parting thoughts? I found myself wondering throughout this talk whether there are any hotspots that you notice coming up over and over with supervisees that you can distill: things to look for, to really watch out for, that we haven’t touched on?

Dr. Jordan: Yeah. Well, there are two things I want to say. One is that I do want to acknowledge the privilege that I have to be able to do a lot of supervision and a lot of supervision [01:08:00] hours with my students. That is a privilege I know a lot of supervisors out there don’t have. So this touchpoint model may have to be truncated or abbreviated. You may not get an hour to discuss hypotheses and test selection; it may be an email, depending on the level of the supervisee. So I do want to acknowledge that.

In terms of hotspots, the thing that I have noticed the most, and this may be an artifact of the clinical training programs in my area, is that my supervisees, for the most part, learn really well how to interpret tests and don’t at all learn how to integrate that data to talk about a person.

I run an internship and an externship, all this, and when I ask for a sample report, I get so many that are written test by test. They’ll have an entire paragraph about what VCI is on the WISC, an entire paragraph that doesn’t mention the person once. It’s beautifully written. It does explain the VCI, but why not talk about the person’s verbal ability? Why not just say, this is their verbal ability? Why does the reader need to know?

Is it absolutely necessary for our reader to know how the test tested for this skill? Is it absolutely necessary for them to see how the sausage is made, or can we put it in language that’s as scientific as we can make it and still clear? I try to balance it. I try to put it in language that is very clear and about the person, and then, in parentheses, I’ll put the science: this subtest, or the WISC VCI index, this percentile, or whatever. I’ll put that in there so they know it’s not just me thinking about their verbal ability, like, oh, they talk good.

[01:10:00] I want them to know that this is based on science, but I want it to be about the person and very readable. This is, I think, the number one area of growth that I see in my supervisees, in addition to the integration of data, discrepancies, and conceptualization. I think conceptualization is the forgotten stepchild in this whole process. I just realized that’s probably a really offensive term; I am a stepchild. I don’t know where it came from. It’s a horrible term. It’s the forgotten family member in this entire process.

I see these reports and I talk to these supervisees about what is going on for a person, and they’re like, well, they have weak this, and poor identity development, and this, and this. And it’s literally just four unconnected findings. I’m like, okay, but how can we think about this as a psychologist? You’ve done a really good job of thinking about it like a psychometrist.

You’ve just reported what the tests found. Now let’s think like a psychologist and put these together based on what we know about human development, what we know about psychopathology, what we know about psychodynamic theory if you’re psychodynamic, or CBT if you’re a CBT person. Whatever theory we know, we need to tie it in. We need to ground our findings in some sort of psychological theory. And I need to see that in the testing process, in the report, and in the feedback, to know that we are thinking like psychologists. This is an area that I think we don’t focus enough on, and it’s a real area of growth for many supervisees that I work with.

Dr. Sharp: Yeah, I completely agree. That’s what we get paid for. That’s why we’re here. Everything that you described up to that point, a computer could do, right?

Dr. Jordan: For the most part. Yeah.

Dr. Sharp: Yeah. I’m totally [01:12:00] with you. That’s where we’ve got to spend our time and our training.

Well, this has been great. I’ll close with a question. I’ve listed a lot of resources just as we’ve been talking. I do know that in some states, I think it’s California, folks have to have six hours of assessment continuing education. People are always asking: do you know of other resources for CEs, specifically for assessment supervision?

Dr. Jordan: Specifically for assessment supervision? No. So many people have asked whether the Essentials book comes with CEs or not. For whatever reason, we just didn’t do that, and I don’t know why. Some of my books do and some don’t; I usually just let the publisher decide which ones. I’m not sure why we didn’t do it for this one.

For CEs specifically on assessment supervision, there’s not much out there that I know of. The sessions that I was talking about at SPA were CE credit-bearing sessions, so I would say look to SPA. I think SPA will likely run them again next year; I suspect it will be virtual, which is great. I come from the one holdout state that doesn’t require CEs. Amazingly, in New York, we don’t need them yet. We’re starting in 2021, I think.

Dr. Sharp: That was us until a year ago. Yeah, it’s crazy.

Dr. Jordan: It is crazy. I was the chair of the APA CE committee and I didn’t even need to do CEs myself. So I’d say, look there. The call for CE workshops is out now, I know, for SPA, and they also tie CEs to a lot of their individual sessions. So I’m hopeful that some of my colleagues and I will try to replicate some of these more [01:14:00] workshop-based assessment supervision CE offerings at SPA next March.

Dr. Sharp: Alright, well, thanks again for coming on. This is another fantastic conversation.

Dr. Jordan: Thanks for having me.

Dr. Sharp: Okay, everyone. Thank you so much for tuning in to my episode with Dr. Jordan Wright. Jordan is such a dynamic speaker, and it was a pleasure to have him back on. I personally learned a ton about assessment supervision, and I’m going to be making some tweaks to my process with our intern and postdocs. I hope that y’all will be doing the same. There are tons of links in the show notes: several books and other resources that you might find helpful in the assessment process.

Like I said at the beginning, if you’re an advanced practice owner and you are interested in really making yourself accountable for the changes that you have always wanted to make in your practice or reaching those goals that you haven’t quite been able to reach thus far, I think the advanced practice mastermind is something for you to check out.

This is a group coaching experience where other psychologists will be in the same boat as you. We really work together to keep one another accountable. We set goals. We check in. The idea is that you level up your testing practice alongside 5 or 6 other psychologists just like you. If that sounds interesting, check it out at thetestingpsychologist.com/advanced and schedule a pre-group call to figure out if it’s a good fit for you. We have, I think, two spots left in the group as of this recording. So, we’d love to talk with you and see if it’s a good fit.

All right. I hope you are all hanging in there. School has probably started for most of you, maybe not for some of you. We are certainly scrambling. We had our pod [01:16:00] fall apart two days before school was supposed to start. So I hope that you all are navigating this whole situation as best you can and taking care of yourselves and your families.

All right. I will talk to you next time. Bye, bye.

The information contained in this podcast and on The Testing Psychologist website is intended for informational and educational purposes only. Nothing in this podcast or on the website is intended to be a substitute for professional, psychological, psychiatric, or medical advice, diagnosis, or treatment.

Please note that no doctor-patient relationship is formed here, and similarly, no supervisory or consultative relationship is formed between the host or guests of this podcast and listeners of this podcast. If you need the qualified advice of any mental health practitioner or medical provider, please seek one in your area. Similarly, if you need supervision on clinical matters, please find a supervisor with expertise that fits your needs.
