382 Transcript

Dr. Jeremy Sharp Transcripts

[00:00:00] Dr. Sharp: Hello everyone. Welcome to The Testing Psychologist podcast, the podcast where we talk all about the business and practice of psychological and neuropsychological assessment. I’m your host, Dr. Jeremy Sharp, licensed psychologist, group practice owner, and private practice coach.

This podcast is brought to you by PAR.

So that you can help more patients, PAR is committed to offering their most popular tests for administration in foreign languages. For a full listing of available tests, visit parinc.com/products/foreign-language.

Hey everybody. Welcome back to the podcast. I am glad to be here with you. Today, I’m talking about a topic that is very important for a lot of us and yet misunderstood by a lot of us. That process is test development. We all use these tests, but at least speaking for myself, I do not have an intimate knowledge of how a test goes from [00:01:00] idea to reality.

That’s what we’re talking about today. I’ve got my friends and colleagues from PAR here to chat with us. I am talking with Melissa Messer, Dr. Carrie Champ Morera, and Kathryn Stubleski. They all play different but crucial roles in the test development process at PAR.

We’re talking through the primary steps along the test development path, we talk about how test publishers decide which tests get developed, we talk about data collection and we talk about how we as clinicians can participate in the test development process, which is interesting to me, and hopefully interesting to some of you as well. Spoiler, we can get compensated for participating in test development.

I hope you stick around and listen to this fascinating episode about test development from start to finish.

And if you’re a practice owner and you [00:02:00] would like to get some support from other practice owners, I would encourage you to reach out. The Testing Psychologist mastermind group might be a good fit for you. This is a group coaching experience with other psychologists who specialize in testing. We work on business development, support one another, challenge one another, and help reach those goals that you may have set for your practice. If that sounds interesting, you can go to thetestingpsychologist.com/consulting.

All right, let’s get to this conversation about test development.

Hey y’all, welcome to the podcast. I am glad to have you here today. I would love to start with some introductions so that folks can [00:03:00] get a sense of who they’re listening to and what you all do. Melissa, can you start us off, please? 

Melissa: Sure. My name is Melissa Messer. I’m the Vice President of R&D at PAR and the Chief Product Officer. I’ve been with PAR for almost 21 years and I’m responsible for all of the print and digital product development within the organization. 

Dr. Sharp: Awesome. Kathryn. 

Kathryn: Thank you for having me. I’m Kathryn Stubleski. I’m the data collection coordinator here within research and development at PAR. I’ve been here for about two and a half years and I’m responsible for collecting data on the products that we’re developing.

Dr. Sharp: Sounds like an important job. Carrie, folks will probably recognize your name at least, but tell them who you are and what you do.

Dr. Carrie: Thank you, Jeremy. [00:04:00] I’m so happy to be here. I’m Carrie Champ Morera. This is my second time on The Testing Psychologist podcast. Wow. It’s been about 3 years already. In 2020, Daniel McFadden and I made an appearance. At that time, we were in the midst of the pandemic, and we were talking about remote assessment and helping clinicians make adjustments so that they could do some testing online.

Currently, my role at PAR is director of content and production. In my role, I oversee the development process and work with a very talented team of project directors, research assistants, and interns, as well as a managing production editor. We oversee the development process from the acquisition phase through the completion phase. We also develop a lot of content within our team and put out a lot of scholarly marketing efforts.

I’ve been with PAR for about 4.5 years and am happy to be back here.

Dr. Sharp: Nice. Happy to have you again. [00:05:00] Thanks.

I’m really excited to talk with y’all. Pulling back the curtain on the test development process is the way that I’m thinking of our conversation today, because while some of us practicing folks have an idea of how tests are developed, I think that if you were to stop any psychologist on the street and force them to tell you how a test is developed, it would not be anywhere close to what actually happens. I feel like we get the high points, maybe, but there’s a lot more that happens behind the scenes. And I’m really looking forward to diving in and hearing from y’all how this whole process actually goes.

Melissa: We’re so excited too.

Dr. Carrie: Happy to demystify the process for everyone.

Dr. Sharp: Yeah. It’s such an important part of what we do. I think we should know more about it. [00:06:00] I am curious, just to provide a little bit of orientation and grounding, are all of you housed in the R&D department? Is that what test development is called in a test publishing company? Is it R&D?

Melissa: Sure. I’m happy to field this question. Our department actually is called R&D, but we think of ourselves as the Product Division, if you will. That’s our focus area. Our team is pretty large. We actually keep growing. And we have a diverse set of focus areas because we’re managing so much across product.

So as Carrie said, her role is overseeing our content and production. So that is our psychologists, that is our editorial staff, our research assistants; they are the project leads across all the projects we’re working on. That always ranges, but they’re [00:07:00] managing over 35 projects right now.

And then Kathryn’s group, as we’ve mentioned, manages data collection. And again, that goes hand in hand with how many projects we have ongoing. And so that covers the recruitment part, managing the data sets, working with outside vendors, and managing the incoming data for quality control, and things like that.

Then we have a whole Digital Division. And so that is our technical product owners. We have content expertise in there that covers the transition of the work that Carrie’s group is doing over to our digital platform. So, requirements writing. It’s an interesting newer area for us as we’re becoming a digital company. We’re spending a lot of time in that area.

And then lastly, we have Quality Assurance as part of our team. Sometimes they’re housed in IT, but they’re housed with us, and they perform [00:08:00] QA work on all of the things we create. So it’s not just the technical digital pieces. It’s the norms we create, the syntax we write, and every table in the manual; they’re actually verifying that all those numbers are correct.

Dr. Sharp: Got you. I was going to ask about QA in this context. It’s funny. My dad was the QA manager for a nuclear construction company the whole time I was growing up, and I didn’t really know what that meant until much later in my life, but I realized that’s a pretty important job, right? So I’m curious, you said a little bit about that, but what exactly does QA look like here in test development?

Melissa: We have right now a team of 4. We have a manager over the team who doesn’t come from the assessment world but had a lot of QA experience, and some of this team is actually made up of folks who [00:09:00] served as customer service tech support staff. So they were manning the phones for years. They really knew the ins and outs of our products. And then we also have an Automation Specialist who focuses on the digital side.

That team has taken on a lot. I would say over the past 10 years, it’s really grown. Initially, the QA was of our desktop software products; so once you build it, it’s there, and then they’re not QAing it anymore. Our print products work the same way, but with the evolution of a platform, it’s a full-time, 24/7 job to ensure quality control, that everything is up and running, and that no bugs got introduced to the system. So it’s really changed the landscape of that.

But the reason they started in R&D did come from the concept that QA for us started around the premise of the products [00:10:00] themselves, what we were making: that a number didn’t get transposed in a norm table lookup, that lines didn’t get off on a carbonless rating form, that everything lined up perfectly. And that has really evolved. It still includes that piece, but it’s grown to a much larger responsibility.

Dr. Sharp: Sure. It seems like there’s a lot more overlap now with the clinical world now that you’re moving into digital. It’s not just, like you said, making sure things line up and look the right way. There are some clinical components that go into that.

Well, I’m very curious to hear about this whole test development process. And the first question I want to ask is one that I’ve been thinking about for many years, and I’m so glad that we get to talk about it now. And the idea is, where do tests or ideas for tests actually originate? Are you [00:11:00] pulling people off the street, are psychologists coming to you, or is it from TikTok? Where do test ideas come from?

Dr. Carrie: I think it comes from a variety of different places. We have authors that will submit proposals to us. We have an online system where anyone can feel free to fill out a form, share some information about their test idea, and then send it in to us. So that’s one way.

Another way is internal. And so we often will come up with great ideas to develop internally, and I’m sure Melissa can share some ideas that she has contributed to in terms of test development over the years.

Additionally, we have market research. So, sometimes we have some ideas about a problem that needs to be solved in the field. And so we’ll send out a questionnaire to customers, and we’ll get a lot of feedback from customers, either through the [00:12:00] market research or even through our customer service team and our sales team as well. And that may spur an idea for the development of a test.

And then additionally, we may find some experts in the field who are really good at what they do, and we may invite them to come along and develop a test for us.

Dr. Sharp: Okay.

Melissa: I can just add that the trajectory of that has changed through the course of the company. Early on at PAR, I would call it more passive in that a lot of tests came to us almost fully developed, if you will, and we were maybe more of a commercialization partner to assist with getting their product out in the market, maybe doing some data collection, maybe some fine-tuning. We were commercializing and providing marketing, sales, and customer support.

I would say in the second half of the company, PAR just celebrated its 45th anniversary, so in that second half, which [00:13:00] is around the time that I joined, we went to much more active thinking about filling out our product line, finding holes where we wanted to have certain products. That turned on the engine of being much more active and deciding the kinds of products we wanted, where we thought there were opportunities, what we were hearing from customers. We had established authors at the time, so we expanded those relationships and said, like, you did so great at this test, have you ever thought about this other test?

And then we spend a good bit of our time doing revisions. So it’s not all just new product development per se. As I always say, the care and feeding of the really robust IP that we’ve established over those 45 years takes up a lot of our R&D time.

Dr. Sharp: Yeah, I would imagine. I was going to ask that question about whether you could even ballpark the percentage of development that you’re doing [00:14:00] that is truly new measures versus revising existing measures. I don’t know. This is just me guessing at things, which is always dangerous.

It seems like we have a ton of measures to get at the exact same thing. And so, I find myself asking, do we really need any new measures at this point aside from moving to digital and doing more advancement in that realm? So I’m curious. Are you finding that you’re truly developing new tests, or are we mostly revising existing ones?

Melissa: I would say that it’s always on a cycle. And so we do hit cycles where we are very revision driven just because, again, of the nature of the timing of when some of our more popular tests were developed. And so getting those revised on a trajectory together is a little bit of a challenge.

So it does ebb and [00:15:00] flow, but we do, I would say, have room for new product development. The percentage of time really varies, but I would say right now we’re probably at something around 20%-ish. It’s a rough estimate. The new products, I think, are oftentimes most in response to customer feedback. They’re saying, although there are tools that do X, Y, and Z, it is not meeting some need.

It could be that a test was created for use in a psychoeducational setting, but that it’s not well adapted to maybe a neuropsychological evaluation. Even though it’s measuring the same construct they’re looking for, it’s just not meeting their needs. And so that’s oftentimes where those new things are.

And then, of course, with digital, native digital development is an area where we’re not reinventing the construct per se, but we are reinventing, especially on performance-based measures, how you can measure [00:16:00] some of those. So digital opens up this behemoth of data that we can collect that we couldn’t in a more traditional performance-based, in-person, non-digital arena.

Dr. Sharp: That’s fair. Thanks for answering that. And this might be a good question for Carrie, talking about the authors or the developers that y’all have worked with over the years. Do you find that you tend to go to folks as subject matter experts or as research and development experts? I don’t know. I’m trying to think how to ask the question with a little more clarity.

I think of some authors who have developed tests that measure a million different things, a bunch of different content domains, because they’re just good test developers. They get statistics and they know how to put the items together and so forth. [00:17:00] But then there are some who stay within a narrow band of content because that’s their subject matter, and then maybe someone else handles the research. I’m not sure. I’m curious how y’all approach that, and if you find that it breaks up like that, or is there a different way to think of it?

Dr. Carrie: Sure. So I think, in more recent times, as Melissa said about the first half of PAR’s history versus the second half of our history, we are now focusing on working with a lot of authors who have that subject matter expertise. And so, we’ll have an author who will develop several different products that are in line with their area of expertise, and that provides the opportunity to position themselves as experts in the field. And so they’re out there a lot promoting the work that they do, the work that they’re passionate about, what their expertise is. And that really helps support the product.

Whereas internally, in the [00:18:00] R&D department, we can then support all of those pieces that go into the test development in terms of the statistics, all the ins and outs of the item development, all of the digital work. So we’re doing a lot of the heavy lifting internally. That way, the author can focus on the content and give us additional ideas and input for which way we want to maybe tweak a scale or which way we want to go with the interpretive report and adding interventions.

And so then that frees them up to utilize their knowledge and it really makes a good partnership between PAR and the authors. We really value the relationship that we have with all of our authors. We want to work with them. We want them to be successful. We really want their input and we really see it as a partnership.

[00:19:00] Dr. Sharp: Yeah. I’m curious what you’re looking for. You mentioned that folks can submit requests or proposals, I suppose, through the website. So say I’m a psychologist just in practice, but I really want to develop a measure to get at something that I think would be important. What would someone need to come with to submit that proposal?

Melissa: We receive proposals. Oh, sorry, Carrie.

Dr. Carrie: Go ahead. That’s fine.

Melissa: We receive proposals that are in all different states. So it could just be an idea, or it could be all the way through: they’ve pilot tested, or they’re even probably halfway to the finish line in finishing the development. And so that really has a huge impact on our ability to weigh in on what we think about the product, but also to weigh in on how much impact we can have [00:20:00] on the product if it’s further along in development.

One of our core criteria is, does it really fit with our core customers? Our customers are traditionally clinical psychologists, school psychologists, and then you can start to branch out into neuropsychology and forensics, but it’s traditionally Ph.D. level with some mix of master’s level professionals.

And so when we get a proposal for something that is for psychiatry, let’s say, that’s not an ideal match in terms of who is already part of our customer base. And so we want to do someone’s product justice and give them a platform to be able to reach their ideal customer. And so if it’s not a perfect fit for us, they may have a wonderful product that is so well thought out and meets the need; it’s just maybe not ideal for PAR.

The second is getting [00:21:00] feedback from our customers about the need. And so, we often engage in market research to get feedback from our current customers about their interest in it. Do they see the need? What is the utility? But then, of course, if stats are available, we look at the credibility: is it measuring what it purports to measure? Do we think that that can be substantiated with additional research?

And so we come at it a bunch of different ways. And oftentimes, we also already have a product that does what they’re proposing the test to do. And we have to consider that. We’re also not really interested in flooding the market with several me-too products that are going about generally the same way. So I think it has to add clinical or research value that we see as an addition to what’s already available in the marketplace.

Dr. Sharp: Yeah. You’ve said a lot about this already, but I wonder if there’s any more to add just [00:22:00] in terms of how you actually choose which tests or proposals you end up developing. Are there any more important factors to share about that process?

Melissa: Yeah, so it’s not just an R&D decision. We partner really closely with sales and marketing. So R&D can fall in love with an idea for a product, but lucky for us, our sales and marketing team are the ones who are responsible for delivering it to our customers. So we really need engagement with them. Is this going to be a challenging test to market? Maybe it’s very complex or confusing or requires a lot of hardware or different things.

And so, if the team doesn’t feel supportive of its fit, that matters. A great example is that we’ve shied away from tests that have a huge lift in terms of manipulative components and details. We don’t [00:23:00] have a huge procurement department, so it’s just not our wheelhouse. It’s just a decision we’ve made as an organization. And so we wouldn’t want R&D to be the only one leading the charge to say, no, now we have to add a procurement department and all these things. So, we consider those different angles.

We also consider things in terms of the author. Our relationships with them, as Carrie said, are very important. We consider them part of the PAR family, and how collaboratively we work together matters. We want to work with people who are as committed, conscientious, and hardworking as we are and really invested in the field, the ethics that we need to follow, and the standards. And so the fit with the person is probably just as important as the quality of the product and how it fits into our strategic plan.

The relationships are everything. And as Carrie said, [00:24:00] we’re lucky to work with some of the most interesting, intelligent people in our field. I know I geeked out when I first joined the company. I had read about them in a textbook, or I had used the test in the clinic, but to get to work with them, form longstanding relationships, and have additional mentors in our lives, it’s a really unique experience that a lot of people don’t get.

Dr. Sharp: Sure. I have the same experience in our Facebook group when authors will jump in and comment on questions about their own tests. Like, is this really happening? It’s a really cool experience. 

Dr. Carrie: Another thing to consider, too, is the timing in terms of development. We have a lot of great ideas. We have, as Melissa said, 35 different active projects that we’re working through. We need to prioritize them. And so part of it comes down to not only the author’s availability, but also looking at what is going on in the field and what people need.

So, for example, if we [00:25:00] go back to when we were in the midst of the pandemic, and we were all scrambling, like, oh, no, now we need something for remote, we bumped up some projects on our list to make those adjustments. And then we went back and looked at some of our paper manuals and worked really hard to convert them to digital manuals so more people could have access. So again, it’s timing in terms of what’s going on in the field.

Dr. Sharp: Got you. Nice. So let’s assume that you’ve decided to develop some tests or a test. Let’s just stay with one. We’ll keep it simple. What are the literal next steps that happen after you approve that proposal? Where do we go from there?

Dr. Carrie: You jump up and down and celebrate.

Dr. Sharp: Awesome. I love a good celebration to kick off.

Dr. Carrie: In all seriousness, there’s a lot of behind-the-scenes work that [00:26:00] takes place before the development can actually begin. There’s a whole contracting process with the author. Melissa controls that process, and there’s a back-and-forth that goes on so we can get a contract under wraps.

And then once that is approved, we have a development plan that takes place. Our project directors work really hard in terms of putting that development plan together, and that really outlines the steps in terms of who is responsible for what. So, who will develop the items, who will write different parts of the manual, and who will do the data collection? It is a guide for what we plan to do, from the beginning of the project all throughout the different steps of the project, outlining those responsibilities.

And that takes some time to work through and then some negotiation with the author in terms of who’s going to do what. But that really helps set us up for success. And then we put some [00:27:00] timelines in there.

Another big piece before the project can get approved is working through the budget. It can be very expensive to develop tests. I don’t think a lot of people in the field realize how expensive it can be and all that goes into it. Data collection alone is very expensive. I know Kathryn will talk about that in a little while.

And then we also need to work on putting a project team together. So who from R&D will be a part of that? We often have sales, marketing, our IT group. It’s really just a cross-functional group of people that work together to develop the tests. And then we also work with internal and external experts as well. So that’s a very brief overview of what comes next. 

Dr. Sharp: Yeah. Well, it sounds like you sit down and try to create a pretty nice outline of what this process will look like and [00:28:00] pull together some of those details and important people to even know where to start.

I would imagine, you tell me, is there a fair amount of variability from test to test in a lot of those details: budget, timeline, departments involved, data collection, or is it pretty consistent? I’m not sure.

Dr. Carrie: We have a general framework, our SOP, that we worked really hard as a team to create over the last few years. So I think we can agree on about 20 or so major steps that every project, regardless of the type of project, would have to go through, but each will vary in terms of the type of project it is.

For example, if a project involves data collection and it’s brand new, that may have a lot more steps than a product that we are maybe transferring from print to digital. And those would have a separate set of [00:29:00] guidelines. So, it really varies, although we have a structure that we generally follow. And each project brings its unique challenges. As I mentioned, we start with a plan, but as we get into it, there are maybe other pieces to consider or roadblocks that we stumble across.

So, for example, if we are designing a test that is going to measure reading in Spanish, well, that requires a whole set of other factors. And so now we need to make sure we have a set of translators, and we’re working with other experts in that field who can help us with the unique challenges for that particular measure.

Dr. Sharp: Sure. So 20 steps in the SOP. What are the major headings there? Do you have major stops on the roadmap in terms of the stages of product development that we could group [00:30:00] them into?

Dr. Carrie: There’s a lot, and we could probably spend months and months talking about this. As we’re onboarding new people onto the team, we spend a lot of time internally getting the staff up to speed, and then things always end up getting tweaked a little bit. But generally, if it is a brand new product, we would say item development is really the first phase, where we’re developing the items and proofing the items. For some tests, if it’s performance-based, we would have to create some stimuli. So then we would need that.

We often will have a kickoff meeting with cross-functional teams at PAR, and so that may involve our marketing, sales, and some admin staff, and just really give a scope of the project. That way, everyone in the organization knows what it is that we’re working on. They might not remember all the details, but they’ll at least know that, oh, this is one [00:31:00] of the many projects that are out there.

Then there’s a data collection phase. We may have 1, 2, or 3 rounds, and that also varies depending on if we’re doing online data collection or print data collection.

Let’s see. Another major process would then be our intake and data entry process where we are getting all of that data back and then putting it into our systems. Then, of course, we have data analysis. And so keep in mind, if we’re doing several rounds of data collection, we have to go back through the intake process and the data analysis again and again.

Then we have our forms that we would develop. So if it’s paper, it would be paper forms. If it’s digital, it would be us working through the PARiConnect platform and getting that aligned to how we want it.

Next we have our pre-requirements phase. That’s the phase where we would do some mock-ups [00:32:00] and work through the different report options: if it’s digital, what it would look like on the screen, and have all of our workflows set up. Then we go through a few more. This is a lot to take in, by the way.

We have a manual or a technical paper. So if it’s a huge project, we would have one of those big manuals that we would love everyone to read. I know not everyone does, but it’s really important for describing how the tests are developed and used. Sometimes for the smaller tests that we may put out, or if we’re doing a small revision, we may develop technical guides or supplements. So then that would be another part of the process.

Then if we jump back into the digital piece of it, there are all of the requirements that the digital team and the developers will work through. And then there’s lots of editing and QA that goes back and forth with all of that. And then finally, it can get to the [00:33:00] programming stage.

We operate through sprints. And so we would put the projects in what’s called a sprint, which is just a chunk of time to do work. We operate in 3-week sprints. And so, that’s when they’ll have programmed everything and worked through all the bugs. That can take months and months for some of our projects. There’s a lot of programming that goes on, so it’s not really instantaneous.

And then there’s just a lot of editing and beta testing. We want to make sure that we do an internal beta review, and that we are making sure internally that the product is doing what we want it to do, that the manual reads the right way, the forms work the right way, that when we’re on the PARiConnect platform, everything is going as planned, and that we’re getting feedback internally. And then we also have expert beta reviewers who do that as well.

Once all that goes through, just a few more steps.

[00:34:00] Dr. Sharp: I am with you.

Dr. Carrie: We have a comprehensive review that we do internally, and so it’s a cross-functional group, and we’re doing the final checks, making sure all of the programming changes have been made and all the editing changes have been made. We want to give it a stamp of approval. So we have all eyes on it. QA will look at it again. And then it gets to the point where we have a product release, and we officially let it out into the world.

By that time, we’ve been so ingrained in the product, and we’re already working on other things, that by the time it’s out there and being released, we think, wow, oh my gosh, I did this so long ago, and now I’m onto the next 5 things.

Dr. Sharp: Right. That’s when you truly celebrate, I hope. Is there a celebration when these products are released?

Melissa: We celebrate. There’s, I think, a little anticlimax for the R&D group as it gets out of our hands at the end. It’s off at a printer, or it’s being released by our dev team.

And so, we are excited, but I think for us, we actually celebrate a bit more later. Although we are a for-profit organization and revenue is important to help us fuel the funding of additional projects, part of the ethos of the company has always been wanting to make a significant impact on society.

And so, my celebrations, I won’t speak for Carrie or Kathryn, are really around when we start seeing the uses of the product, because for each of those uses, you can consider that a child or an adult or a parent, or someone in the world, is hopefully getting the care that they need in order to investigate what’s going on.

One of my favorite things about [00:36:00] assessments is that it really is data that’s used at all these critical time points in someone’s life. And so when we see those numbers, we really review them and think about just how many people the cool work we’re doing is having an impact on.

Dr. Sharp: Yeah, I can only imagine how rewarding that’s got to be to see it go out in the world and people appreciate it and clients get help from it. That’s pretty special.

Talk to me about the timeline a little bit. There are so many questions in the Facebook group and in the testing community, like what’s taking so long with the new version of blank, or why don’t we have a test for whatever? Obviously, that description you just gave, Carrie, reflects a ton of attention, detail, and meticulousness. Can you give me, I know it varies, but what’s the timeline from idea, let’s just say proposal [00:37:00] acceptance, to publication and release?

Melissa: I’ll give my answer, and I know how different it is from everyone else’s, but it does vary so much because the nature of our projects varies. I would say on average, it’s between 3 to 5 years from a real start to finished development. I can understand, too. I do want things to come out quicker and faster.

I think one of our biggest challenges, and it will continue to be, is around data collection. The time that it takes to collect valid cases that represent, typically, the census, which is an ever-changing target. So we’re always chasing after a robust sample that’s representative and valid. That is the linchpin for a lot of it.

And then I would say, some people [00:38:00] don’t think through some of the logistics, but on an achievement test where you’re testing in fall and spring, if you don’t finish your sample in fall, you’re waiting till next fall to get the rest of that sample. So that’s a year you have to wait for the calendar to roll over.

So it’s things like that, logistical things, making sure kids aren’t about to age up and you put them in a sample when their birthday is actually tomorrow. Real detailed stuff about the kinds of cases. We were very sensitive during COVID about not wanting all the tests we were developing at that time to be collected during what was going on in the world, and how long do you wait till after? So I think data collection is one.

And then I would say one of the unique features of PAR is that, in this ever-changing world, we’re trying to maintain flexibility in how people can give assessments, both on paper and digitally. And so we’re really creating two [00:39:00] products at once most of the time. We have some that are completely digital, but that means, as Carrie described to you, we’re actually running 2 projects simultaneously and trying to publish them at the same time with different kinds of teams and groups. So, that adds to it.

But I would say, our challenges are no different than others’. I’ve heard other publishers on your podcast. I think we’re all in it to create the best tests that we can, tests that are valid and reliable. And it truly does take time.

Dr. Sharp: Yeah, of course. I’m glad that you brought up this idea of paper versus digital. I think it’s easy to overlook that it’s two parallel development paths even just for the same measure. I think a lot of people too would assume that digital is easier and I don’t know if that’s true. [00:40:00] Could you speak to that at all? That’s a very open-ended vague question, but I’m curious where we might take it.

Let’s take a break to hear from our featured partner.

We hope that you’re enjoying today’s podcast. I want to let our listeners know that PAR has a program that offers incentives to data collectors. Plus, once you’ve participated in a project, you’ll continue to be notified of future data collection opportunities. If this interests you, go to parinc.com/resources, then just scroll down and click on Partner with PAR to learn more.

All right, let’s get back to the podcast.

Melissa: Yeah. There’s the development of the content irrespective of what platform it’s going to go on. That’s particularly true for rating scales. So, you’re creating items, and then there’s the way in which you’re going to administer them, whether it’s paper [00:41:00] or print, or a hybrid where you may give it in person and then hand it in afterward. So that is a bit simpler in terms of doing those things simultaneously.

It’s a bit more complicated on the performance-based side. We wanted to create a situation where you have equivalency between what you’re creating, to have synergies between the two, but I think one of the biggest challenges is that we have to finalize our test decisions, the norms and all the details. We have to get to a certain point before we can start on the digital piece. And so there’s a long runway leading up when you have a hybrid project where you’re doing both. Whereas when you’re creating something digitally native, you’re not held back by anything but the platform you’re creating on. But it brings with it a whole host of other challenges.

So I don’t think one is easier than the other. I think it’s more [00:42:00] about being conscientious about having those flexible options. The way I’ve always said it is, when I was practicing, I wanted to see the human who was sitting in front of me and decide the modality that I wanted to test in, not have the test dictate that. Oh, well, I’ve decided to give a digital version. Why? This kid may not do well with that. So I love our strategic focus of flexibility.

The hard part is keeping those things equivalent, developing them all, and maintaining them all at the same time from a test developer’s perspective. But I think from an end user’s perspective, people should expect to have those options. Different practice settings and different kinds of situations really do demand that kind of flexibility.

Dr. Sharp: That makes sense. I’m glad you clarified that. I think when I said digital is easier, I was definitely referring to questionnaires or rating scales, [00:43:00] versus if we’re talking about cognitive measures.

Melissa: They’re two different things.

Dr. Sharp: Yeah, certainly. I wonder if we might talk just briefly about the economics of all this before we dive into data collection. Kathryn, you’ve been waiting very patiently to shine in your data collection role, but I’m curious about the economics of this, both from the author side, if there’s anything you can say about that, and on the consumer side, because, just speaking for myself, I see rising costs for the tests we use….

Everything is digital at this point. And so, there’s a part of me that is like, why does this cost so much? We’re running the same software over and over. It’s not like you’re printing materials. Why am I paying so much for all of that? So maybe we start there. And then, if we can, I’d love to hear about the economics for authors as well, if [00:44:00] you can share that.

Melissa: I love this question because I feel like it’s probably one of the hardest for our customers to understand. We have a lot of expenses, as Carrie talked through with you in all of the phases and steps. We have a huge expense to create a product from start to finish, and there are a lot of risks that go into that. We’re developing, especially with a new product, having no idea what the customer reaction might be. We hope for the best. We’ve gotten to the point of doing market research and all those things, but in essence, we’re taking a lot of really big risks in the work and the commitment of our time, even that 3 to 5 years of not working on something else.

So the expense is really high. And so when people look at a paper form and say, gosh, this cost me $2.25 per administration, they’re, I think, [00:45:00] oftentimes thinking about the cost of the paper and not the tons of expense that was put into collecting that 88-year-old Hispanic female with less than 11 years of education. Kathryn is smiling and nodding. We have to support the collection of those cases by providing funding to both the data collector and the examinee. So we have all those expenses.

We have the expense of paper, which, for those of you not publishing, paper costs have gone up significantly post-COVID, and just the ability for us to actually access paper is a whole new ball game. Our warehouse that I can see outside my window is huge, and we support keeping the product here, plus test security, copyright, all of those things that we have to maintain and keep ahead of and keep control over. That’s the [00:46:00] paper world, if you will.

On the digital side, I think oftentimes people are under the impression, as you said, that I’m just clicking the same button every time, so how can there be an expense? The expenses are actually much higher, in my opinion. We have a team of developers that have more work than they can ever probably get to, and our IT expenses have increased significantly through the years that I’ve been here.

We went from desktop software, which, as I said before, was one and done and didn’t require constant maintenance, to a digital platform. One that is HIPAA-compliant and secure, with all the security concerns that go along with that, requires 24-hour constant maintenance and constant updates, not only to keep it operating and producing correct scores, but also to keep it reliably [00:47:00] available.

We have tons of customers all on at the same time, maybe at 1 o’clock in the afternoon, but we also have… Australia is on the opposite time of all of us. And so there are people on our platform all the time.

So those expenses, I think, are the ones that are confusing to people, but really all those development expenses are still there, even though it’s digital, and we are able to provide more updates on digital and push more new content out more quickly. Hence, I think, some of the expense differential that can sometimes be seen between print and digital.

We make every effort to have those expenses be as close to parity as possible. It’s just not always a 1:1 ratio. We don’t have a paper interpretive report, so there is an additional expense for purchasing that report because we don’t have that equivalent over here. But we do our best not to add expense based on the modality [00:48:00] that you’re giving the test in.

Dr. Sharp: That’s fair. Thanks for talking through that. That does make sense to me. And when you outline all those factors, I can understand. It’s of course reductionistic to say that it should be cheaper just because it’s software. So this is how…

Melissa: We understand all the rising expenses. We’re very acutely aware. As Carrie said before, we are always listening. We’ve talked about focus groups. We’re always asking for feedback. We’re doing tons of surveys, and we do hear where we might have hit the ceiling a little bit, in our minds, of recouping our expenses, and understand that maybe we’ve missed the mark a little bit in terms of how the test is used, or the number of raters that might be required for one test versus a different test. And so we do take that into consideration.

We evaluate our prices every year and, [00:49:00] again, try to be very fair and balanced, keeping in mind, as I said before, PAR’s number one rule: be kind, do good. That is what we strive for. And we want those assessments to get into people’s hands and be out there in the world. So we don’t want to put up a barrier that is going to price people out of using them. We understand the real-world logistics around that.

Dr. Sharp: Great. And then, just briefly before we transition to data collection, just from the author’s side, I’m so curious about this. Like if I were to somehow develop a test, how do I get paid for that? Is it a royalty situation? Is it a licensing situation? Is it some other thing that I don’t know about? How does that work?

Melissa: We work collaboratively with our authors to develop an author agreement at the beginning of the project, as Carrie said, and we [00:50:00] determine, depending on the level of work the author is doing versus what PAR is doing, a negotiated royalty payment, with a lot of variables that go into it, and then there are some industry standards.

But in general, our goal is to provide authors with… They’re taking time out of their day job to do this work, and we’re providing fair compensation for that. Clearly some products do much better than others, and so there’s variability in what they end up receiving, but I think we do a really good job. We’ve attracted some amazing authors, and repeat authors, time and time again. And so I think we’ve done a good job of finding, for each test, that balance of what makes sense in terms of who’s responsible for doing what.

Dr. Sharp: Yeah, that makes sense. Well, let’s talk about data. This is important. Like I said, Kathryn, you’ve been very patient. 

[00:51:00] Kathryn: Patience is key with data collection, I’ve learned.

Dr. Sharp:  Oh, this is good. That’s good. You have a lot of practice. All right.

So yeah, let’s talk about data collection. How does that start? Where do you go? Just tell me anything you’d like to about data collection.

Kathryn: When Carrie was talking about the general SOP of projects that we are developing here, usually the conversations about data collection happen pretty early on.

So usually the project director would come to me, catch me up to speed on the project, the idea, and their ideal sample, in terms of how many people they want, what populations, what clinical groups, that sort of thing. And we go back and forth about the feasibility of that and any challenges we foresee.

In a recent example, a project director mentioned wanting a complete dyad for every case that we get in. So that would be a self and informant pair for each participant, and so one thing I [00:52:00] brought up was, what happens when we get incomplete pairs? What if someone doesn’t finish their data? And so it’s those conversations back and forth about how we can proactively problem-solve things that might happen in data collection.

From there, we start discussing payments to examiners, or payments per case that examiners may turn in for data collection. I would say the factors considered there would be the administrative burden on the examiner, how long the test takes to administer, the difficulty of accessing the particular populations we need, and then also the exclusionary criteria for the participants: How difficult is it to find people who will qualify for this study?

So the project director and I will suggest what we think is fair compensation. We’re not typically the final decision-makers on that, but we’re passing it forward. Once the budget and everything is approved, it’s on me to start developing and preparing for data collection.

Usually, the first step, I [00:53:00] would say, is taking the census data and turning it into the people that we need to collect data on. I would call those cells, or spots. I usually refer to spots that represent each participant that we want to collect. We typically stratify our samples based on age, sex, race and ethnicity, and educational attainment.

So we take the census data and make an actual grid of each person, or spot, we want to collect in our sample. And so once we begin collection, an examiner or a data collector would look at that list and see these as people that they have access to based on those demographics. So they may see a 45-year-old white male with a college education and say, I have access to someone like that. I’m going to reserve this case. Reserve is a word that I use that means I intend to collect this case.

[00:54:00] Filling those spots, I think, becomes more difficult the longer data collection goes on because it becomes very specific. So our examiners are used to getting emails from me that may say, do you have access to a 4-year-old black female whose parents have a high school education? So, very particular spots that we need at the end. And as Melissa was mentioning, trying to fill those spots to get our ideal sample can really prolong data collection, both in time and in cost.

Dr. Sharp: That makes sense.

Kathryn: Yeah, sorry to interrupt. Once we have that sample, I typically reach out to examiners or data collectors that I know either may have access to those populations, or they’ve collected data for us recently, or have just expressed interest in data collection, and then they go forth and recruit research participants on our behalf.

Dr. Sharp: That’s interesting to me. I don’t know if I’m going to characterize this the right way, but it almost sounds like y’all take [00:55:00] a bottom-up approach where you know the individuals whom you’d like to have in the sample and then go find folks who have access, versus a top-down approach where I would almost cast a wide net in a specific geographic area and say, test everyone in your practice. Is that fair? Is that a reasonable understanding, that you are actively trying to fill very specific spots in the sample?

Kathryn: Correct. And I would say, in the beginning of a project, there’s certainly more flexibility in who examiners can test. So it is a wider net at first. But, like I mentioned, the closer we get to being complete, it’s very specific spots that we’re trying to make sure we represent in our sample. So that’s where there’s a lot of active, intentional recruiting: how can we find people who match these demographics so we can make sure they’re represented?

Melissa: And I think that’s [00:56:00] really changed for all the test developers. I can speak to when I started here; there were some great foundational tests, tests that everything about was great, but if you really looked at the sample, it was not very representative.

I’ll use one of our flagship products, the BRIEF. Its original sample was collected generally in the Northeast, in certain schools, by the authors. Does that represent California or Texas? And so our revision then addressed that.

So there are a lot of older tests out there that, at the time they were collected, used more convenience samples. They’re still great tests, but again, as we continue to learn that there are differences, whether it be by sex or age or parent education, we want to keep improving these tests. Hence, the more standard methodology now is to be that precise. The census is evolving so [00:57:00] much, and we could debate the accuracy of the census, but we’re trying our best to have the most diverse and representative sample that we can so that the results are the best they can be.

Dr. Sharp: Yeah. There’s always a lot of talk, but I think more over the last several years about inclusion and the norm sample and so forth. And so, I’m glad you brought that up. I’m curious how y’all tackle it when you’re having trouble accessing a lot of groups that tend to historically get left out of the standardization whether it be, I don’t know, rural individuals or certain marginalized groups or any number of other folks. These days it’s maybe non-binary, gender-diverse individuals. How do you tackle that and actively try to include those folks?

[00:58:00] Kathryn: I think one key factor is having as big a pool of examiners to pull from as we possibly can. So I am always recruiting people who are interested in data collection with us, just to have their contact information on file. And I do ask upfront, are these populations you have access to, so that we have a little bit of guidance when we need to find those particular populations.

So, the majority of examiners, I would say, initiate contact with us. They express interest either through a form on our website, or maybe they see our information through a PARtalks webinar. But when we’re after a trickier population to access, I do more intentional recruiting. So I may be researching community groups. We’ve collected in local schools. We’ve collected at other local agencies that have more of the populations we’re needing, or at other agencies across the United States.

So it’s really reaching out and seeing who has access to these people. Are there any researchers in the field who are [00:59:00] working with these types of populations that would like to do a data agreement with us or partner with us in some way to get this data? But it is tricky.

As Melissa mentioned, I think one of the challenges in data collection is the population of the United States has gotten more educated over time. So it’s proving quite difficult to get people with lower education levels, but we still need to represent those people. Obviously, it’s very important.

And I think that’s another challenge with online data collection. By that I mean both completing your survey or rating scale online and also recruiting online. The positive is that we can recruit a much more diverse sample from different locations, but the negative is that you may be excluding people who don’t have access to that type of technology, or who don’t spend as much time on technology to even see that the study is going on. They don’t know about the opportunity to participate.

So, I think there’s a fine line of online data collection and the challenges that we see with that.

[01:00:00] Dr. Sharp: Yeah, that’s fair.

Melissa: If I could just add, I think it’s a great plug to the listeners, the end users of the products: we need people to participate in the data collection efforts, especially those who work with the special populations that they want to see represented in our products. I think we hear the criticisms, and we’re well aware of them, and it’s not for a lack of trying, but it does really require us as a field to come together and support the notion that if you want those samples represented, we really need access to them, and we need people’s help.

And so it’s been amazing to see the number of examiners. We have some examiners that have collected for us since I’ve been here, so all 20 years, that have stayed part of that [01:01:00] community and contributed. I think that’s very exciting for their surrounding communities that they’re being represented. We just need more of that in order for the products to represent, especially some of those smaller, more marginalized populations.

Kathryn was really referring to our normative samples. We haven’t even dipped our toe into getting access to clinical samples, which is a whole other host of challenges and barriers. We use other means and methods to get those as well. But again, we want to reach out to your listeners and our customers. Even if it’s 5 cases, those 5 cases maybe would not have been represented if they didn’t contribute to our data set.

Dr. Sharp: Of course. I appreciate the distinction between the normative sample and the clinical sample. That’s a huge issue. [01:02:00] Then there’s so much to get into in terms of who has access to mental health services. Just barriers around that.

What are some of the current data collection projects that you’re engaged in?

Kathryn: That’s a great question. Thank you for asking. Within the next month, we’re going to launch two data collection projects. The first will be the NEO-PI-3 normative update. The NEO is authored by Dr. Paul Costa and Dr. Robert McCrae. It’s a personality assessment built on the five-factor model of personality.

For this data collection effort, we’re going to be asking examiners to recruit participants aged 12 to 99, as well as a close contact of that participant, so a friend, family member, or someone who can serve as an informant on the participant. The participants will be asked to complete an online NEO self-report and an informant report. The survey could take up to an hour [01:03:00] for each participant to complete because they’re really reporting on themselves and someone else.

To be eligible to participate in that project, all participants must provide consent or be given parental consent and be able to read proficiently in English. But beyond that, there’s not really any other strong exclusionary criteria. So that’s a pretty good project to get involved in if you’re an examiner or data collector because it’s like you said, casting a wide net of participants. Most people are going to be eligible to participate in this project.

So we have that coming up and we’re also going to be collecting some new data on the Reynolds Intellectual Assessment Scales, the 2nd edition or the RIAS-2 which is authored by Dr. Cecil Reynolds and Dr. Randy Kamphaus. We’re going to be doing an exploratory study into whether or not norms have changed since we’ve published the 2nd edition.

So this is an in-person performance-based test for individuals aged 3 to 94. Examiners for this [01:04:00] project will be asked to recruit participants that we need based on a US census-based sample, and to administer and score the measure. Preference will be given to examiners who have experience with the RIAS-2 because it’s been out for a while, and I know some people use it frequently. So there would be much less of a learning curve for those examiners, of course.

The administration of the RIAS-2 plus the consent and demographic questions needed for data collection will probably take about an hour of the examiner’s time. And we will be asking examiners to complete a minimum of five RIAS-2 cases if they want to get involved.

So those are the most pressing projects we have, but in the fall and winter, we have a few more scheduled to launch, which will be standardizations of the Trauma Symptom Checklist for Children and the Trauma Symptom Checklist for Young Children, by Dr. John Briere. For that project, we’ll be recruiting participants without a known trauma [01:05:00] history, as well as clinical samples, people with a known trauma history or other mental health diagnoses.

And then, we’ll be standardizing the Judgment of Line Orientation and Line Length, which is authored by Dr. Cecil Reynolds and Dr. Robert McCaffrey. So this is a digital performance-based exam of line orientation and line length on a digital platform, and we will be collecting standardization data across the lifespan.

For this project, examiners will be asked to recruit, get consent, get demographic information on their participants, and then proctor an online session. It uses audio administration. So the examiner is a proctor while the examinee goes through the test items.

So we have a busy year planned in data collection. As Melissa said, we need as many examiners [01:06:00] as we can get. So if you’re interested, please contact me.

Dr. Sharp: Yeah. I think we’re going to spend a little bit more time on that before we wrap up- how folks can participate and engage in this. But gosh, I just want to give a shout-out to a digital version of the JLO. That’s great. How long has it been since those norms were updated? It seems like it’s been a very long time.

Melissa: If I had to guess, and I don’t even know, but I’m going to say the ’70s or ’80s.

Dr. Sharp: Oh, my gosh. Yes. 

Melissa: We’re really excited about it. It’s the first test that we built completely digitally native on a data collection version of our platform. And so it’s the first one, and it’s got some cool features with the audio pieces. And so we’re really excited to go into standardization with that.

Dr. Sharp: That is cool. So what’s the timeline on that? 2 years, 3 years?

Melissa: It’s a little hard to say, [01:07:00] but I would say we’re closer to within the 2-year period than the 3-year period.

Dr. Sharp: Okay. Selfish question. We use it in our practice. So, very curious. 

Melissa: We’d love to have you collect some data for us.

Kathryn: Yes.

Dr. Sharp: Okay. Here we go. Worlds collide. Yeah, let’s talk about it. So let’s talk about the interaction of practitioners or examiners and PAR. What are the ways that we as practitioners can partner with y’all?

Dr. Carrie: Oh, we would love to have as many people as possible partner with us. We really value the input of our customers and clinicians. All the work that all of you are doing in the field is just so important and impactful to people’s lives. And in order for us to continue to develop great products, we need your support and input; it’s really a way to help shape the future of psychological assessment.

And so there’s a variety of ways [01:08:00] that you can do that. When we have calls, not only for data collectors, like Kathryn talked about in depth, that’s always a great way. Additionally, we will put out calls for experts and bias reviewers for different products that we are working on; especially in the early phases of test development, for the particular constructs we’re studying, we may need to seek expertise.

For example, if it’s a trauma product we’re working on, we may seek out some trauma experts and rely on them to help us. Then also, the beta reviewer process is another opportunity for reviewers to really help us out. We want people in the field who are going to be using these products, or have used similar products in a variety of settings, to try out the products before we release them to the world.

And [01:09:00] so I think it’s exciting because it gives people the opportunity to have a sneak peek at what we’re going to be releasing, and it also gives them a chance to give input to refine the product in a way that we haven’t, because while a lot of us, or some of us, in the organization have been practicing clinicians, we’re not always in tune with the day-to-day because we’re not out there regularly doing it. So that is why it is so important for those who are administering the tests every day to really help us shape these products.

Additionally, we have the PARtalks webinars, which have really gained a lot of traction lately. We cover a lot of topics in terms of what’s going on in the field and how to use some of the assessment instruments. And we really love to partner with clinicians and [01:10:00] other experts in the field to speak on some of these topics, also related to mental health. And so, if anyone out there is really interested and has expertise in a particular area of mental health or psychological assessment, we would love to hear from you and talk about getting you on our PARtalks webinar schedule.

Let’s see, we also talked about test proposals early on. So that’s certainly another way. And let’s see, the last thing I can think of is market research.

So, from time to time, we will send out customer surveys to get people’s input on some of the products that were released, or maybe we’re trying to figure out some new test ideas and would like some input, or we’re trying to figure out what some trends in the field are and would like feedback on that, as well as feedback on how we can improve and provide you the best [01:11:00] products and the best service and help you use these products in the work you’re doing every day.

Dr. Sharp: I love that. And when you say reach out, how do people very practically get in touch with you if they’re interested in any of these things? Is there a form on the website? Is there an email address? How does that work?

Dr. Carrie: There’s several different ways. If you go to our website, there’s some information on there. In terms of our customer service contact, if there’s questions directly about the products and how to use them, you can reach out to them there. We also have some information about data collection and how to reach out to Kathryn, particularly regarding that.

And Jeremy, we can share with you a number of links so people can have access to those in the show notes, and then just click on those various links and they’ll take you to the [01:12:00] right contact or the particular form that we have. We have certain forms for test proposals, for example.

Dr. Sharp: Great. If there’s any episode to pay attention to the show notes, this is a good one. Folks, make sure to check those out and get in touch if you’re interested in partnering.

This is super illuminating for me. Just working backward, the partnering options, there are a lot more of those than I had really imagined. And I’m guessing some of those are a little bit of a surprise for members of the audience as well. It’s great to know that we can be more involved, because for a lot of us, it is hard; it’s like being in a car that someone else is driving, wondering, when are we going to get there? I don’t know. It feels like a black box sometimes. So to know that we can jump in and be part of the process gives a little bit more sense of control over [01:13:00] test development and where we’re headed with all these things.

I can’t say enough thanks to y’all for diving into this whole process and pulling back the curtain a little bit on test development. I’ve learned a lot during this hour. Thank you.

Melissa: Thank you.

Kathryn: Thank you for having us.

Dr. Carrie: Thank you.

Dr. Sharp: All right, y’all. Thank you so much for tuning into this episode. Always grateful to have you here. I hope that you take away some information that you can implement in your practice and in your life. Any resources that we mentioned during the episode will be listed in the show notes, so make sure to check those out.

If you like what you hear on the podcast, I would be so grateful if you left a review on iTunes or Spotify or wherever you listen to your podcast.

And if you’re a practice owner or aspiring practice owner, I’d invite you to check out The Testing Psychologist mastermind groups. I have mastermind groups at every stage of practice [01:14:00] development: Beginner, Intermediate, and Advanced. We have homework. We have accountability. We have support. We have resources. These groups are amazing. We do a lot of work and a lot of connecting. If that sounds interesting to you, you can check out the details at thetestingpsychologist.com/consulting. You can sign up for a pre-group phone call and we will chat and figure out if a group could be a good fit for you.

Thanks so much.

The information contained in this podcast and on The Testing Psychologist website is intended for informational and educational purposes only. Nothing in this podcast or on the website is intended to be a substitute for [01:15:00] professional, psychological, psychiatric, or medical advice, diagnosis, or treatment.

Please note that no doctor-patient relationship is formed here, and similarly, no supervisory or consultative relationship is formed between the host or guests of this podcast and the listeners of this podcast. If you need the qualified advice of any mental health practitioner or medical provider, please seek one in your area. Similarly, if you need supervision on clinical matters, please find a supervisor with expertise that fits your needs.

