This episode is brought to you by PAR.
PAR offers the RIAS-2 Remote and RIST-2 Remote, which let you remotely assess or screen clients for intelligence. They also offer e-stimulus books for in-person administration of these two tests. Learn more at parinc.com.
Hello everyone, and welcome back to the podcast.
Today, I’m trying something a little bit new. I’d like to periodically throw in a research spotlight episode to highlight recent publications in the neuropsychology realm. Today’s episode discusses a 2023 meta-analysis published in the Journal of Pediatric Neuropsychology called “Tele-neuropsychological Assessment of Children and Young People: A Systematic Review.”
Without further ado, let’s dive in.
All right, I will get right to it with this research spotlight.
Many of us utilized teleassessment during the COVID-19 pandemic primarily as a response to the limitations on in-person gatherings, right? We couldn’t gather together in the office, but kids still needed to be assessed and we found a way to do it.
Since then, I think most of us have returned to in-person assessment for the most part. We’re continuing to do intakes and feedbacks remotely, but the testing itself has been in person for a long time now in our practice.
That said, there are some groups out there who’ve been practicing teleassessment since long before COVID, and they continue to do so. For the rest of us, COVID-19 opened up the possibility of teleassessment as a viable practice alternative. I know several folks now who maintain an entire teleassessment practice that they did not have before the pandemic.
The cool thing is that, a few years out from COVID, we’re now at the point where there are several studies looking at the impact and feasibility of teleassessment in kids, hence this meta-analysis.
A little bit of info about this meta-analysis.
As you all know, I hope, a meta-analysis is essentially a research article about research articles. This particular meta-analysis included 21 existing studies. They ran the gamut in terms of assessment measures used, though most of the studies looked at multiple cognitive domains, and all of the measures used were adapted from existing standardized measures.
The WISC was the most common IQ assessment in the studies, and the CELF (I say “self”; I’m not sure what others say) was the most common measure used to assess language. Interestingly, only one or two of the included studies used freestanding PVTs or SVTs, and they did not report the results. We’ll circle back to that later as a limitation.
So, 21 studies included. Generally speaking, the authors found that environmental and technical difficulties most likely occurred once within an individual testing appointment rather than across sessions. About a quarter of folks had to borrow a device to complete the teleassessment.
What other factors popped out here? There was a correlation between not having a device and not attending teleassessment appointments, which makes sense to me: folks who did not have access to a device were more likely to miss a teleassessment appointment.
As far as feasibility, most studies said that tech difficulties were the primary barrier, which makes sense. However, the rate was lower than I expected: slow bandwidth or compromised audio quality disrupted only about 6% of sessions. On the whole, across multiple studies, environmental distractions and tech difficulties were typically short-lived and, as best anyone could tell, did not invalidate test performance or bring the assessment to a stop.
Also interesting to me: there weren’t any behavioral differences between teleassessment and in-person assessment noted in any consistent or meaningful way, aside from complaints about audio or visual quality.
All right. Where do we go from here?
A few other points of discussion. Generally speaking, feedback on teleassessment was positive, and participants completed the assessment at a high rate. There were largely strong relationships between scores from teleassessment and in-person assessment, particularly for kids older than 3 years. That is another limitation I’ll mention later: there were not many studies of kids under 3 years, and reliability seemed to take a dip with younger kids. For the most part, not many barriers were reported, which is great.
Let’s take a break to hear from our featured partner.
The RIAS-2 and RIST-2 are trusted gold standard tests of intelligence. For clinicians using teleassessment, PAR offers the RIAS-2 Remote, which allows you to remotely assess clients, and the RIST-2 Remote, which lets you screen clients remotely for general intelligence.
For those practicing in the office, PAR has in-person e-stimulus books for both the RIAS-2 and the RIST-2. These are electronic versions of the original paper stimulus books that are an equivalent, convenient, and more hygienic alternative when administering these tests in person. Learn more at parinc.com/rias2_remote.
All right, let’s get back to the podcast.
Let’s talk about reliability. It was generally pretty good. When I say reliability, I mean the agreement between teleassessment and in-person assessment. The areas where it varied most were speech-language and reading comprehension, which the authors hypothesize might have been due to audio and visual challenges interrupting those language-based tests.
There were a number of variables that were inconsistent across the studies. They had differences in study design and in the teleassessment setup, for example, how many cameras they had, whether they used a desktop camera or a laptop, and how they showed the materials.
There were differences in the statistical analyses and the timeframes of the studies as well. Some were very short; some had repeated administrations, testing folks multiple times, whereas others tested the kids only once.
The authors discussed how teleassessment was lacking a little bit in terms of reliability, particularly for executive functioning and processing speed measures. They also discussed generalizability.
One of the major complaints about the studies included in this meta-analysis is that very few came anywhere close to large sample sizes. Most had relatively small samples and large age ranges, and a lot of them were pilot or feasibility studies. The majority were conducted in the United States, and there wasn’t a ton of overlap in the teleassessment measures across studies.
Let’s see, what else is important?
Other complaints, or rather directions for the future, would be including freestanding PVTs or SVTs. At this point, I think the majority of us are doing PVTs during in-person evaluations, so doing the same thing in teleassessment would be very important.
The authors also talked about how, at this point, there are a few settings where teleassessment is not 100% appropriate; they specifically mentioned adolescent forensic settings. They also mentioned the need to update privacy and informed consent procedures and adapt them for teleassessment, especially around the increased transfer of electronic information.
They also talked a bit about psychometrics and acknowledged that they did not dive deep into the technical reports for some of these tests, which was another shortcoming of the study.
And perhaps lastly, they also mentioned that children from marginalized groups were underrepresented in these studies. So I think this is one of the most important factors to take into account because a large reason for teleassessment is to provide access to folks who may not be able to make it into the office or who may not otherwise have access to a neuropsychological assessment.
And those individuals, those kids, were also at increased risk of not having access to the internet or to the technology needed to access school materials and so forth. I think that bled into this area as well, making it more difficult to include children from marginalized communities in these studies. That was a major shortcoming that the authors found and rightly commented on, and they emphasized the need for future research to include these kids.
And of course, the authors note that this is preliminary; it’s one of the first meta-analyses to look at teleassessment post-pandemic. So it’s an emerging area of study, but all in all, they conclude that the “evidence from research studies indicates that pediatric teleneuropsychology in clinical and nonclinical populations is feasible and acceptable.”
They cite preliminary evidence for the reliability of some assessment measures, but, as I said, they also mention many shortcomings: performance validity measures not being included, most studies being small, and samples being mostly white kids over 3 years old. So generalizability is a little lacking at this point.
Now, all that said, there are some fantastic groups out there who are doing really good work and research into teleneuropsychology.
You may have heard of Lana Harder and her group down in Texas. She has been looking into teleneuropsychology with kids for years, and you can always do a Google Scholar search and find some of her research. She was also on the Navigating Neuropsychology podcast maybe a year or two ago.
Jordan Wright is doing quite a bit of work in this area. I highly recommend that you check out his book; I think it’s called Essentials of Teleneuropsychology Assessment. Oh my gosh, I’m sorry, Jordan, if you’re listening; I will look up the correct title and post it in the show notes. Jordan has done a fair amount of publishing in teleassessment as well.
All right, this is an emerging field and I know that in our practice we are going to continue to incorporate teleassessment in some form or fashion. It’s not going away. And I imagine that we’re going to see a lot more research in the coming years.
All right, y’all. Thank you so much for tuning into this episode. Always grateful to have you here. I hope that you take away some information that you can implement in your practice and your life. Any resources that we mentioned during the episode will be listed in the show notes. So make sure to check those out. If you like what you hear on the podcast, I would be so grateful if you left a review on iTunes, Spotify, or wherever you listen to your podcasts.
And if you’re a practice owner or aspiring practice owner, I’d invite you to check out The Testing Psychologist Mastermind groups. I have mastermind groups at every stage of practice development: beginner, intermediate, and advanced. We have homework, we have accountability, we have support, we have resources. These groups are amazing. We do a lot of work and a lot of connecting. If that sounds interesting to you, you can check out the details at thetestingpsychologist.com/consulting. You can sign up for a pre-group phone call and we will chat and figure out if a group could be a good fit for you. Thanks so much.
The information contained in this podcast and on The Testing Psychologist website is intended for informational and educational purposes only. Nothing in this podcast or on the website is intended to be a substitute for professional, psychological, psychiatric, or medical advice, diagnosis, or treatment.
Please note that no doctor-patient relationship is formed here, and similarly, no supervisory or consultative relationship is formed between the host or guests of this podcast and listeners of this podcast. If you need the qualified advice of any mental health practitioner or medical provider, please seek one in your area. Similarly, if you need supervision on clinical matters, please find a supervisor with expertise that fits your needs.