163 Transcript

Dr. Jeremy Sharp | Transcripts

[00:00:00] Dr. Sharp: Hello, everyone. Welcome to the Testing Psychologist podcast, the podcast where we talk all about the business and practice of psychological and neuropsychological assessment. I’m your host, Dr. Jeremy Sharp, licensed psychologist, group practice owner, and private practice coach.

This episode is brought to you by PAR. The TSCC and TSCYC screening forms allow you to quickly screen children for symptoms of trauma. Both forms are now available through PARiConnect, PAR’s online assessment platform. You can learn more at parinc.com.

All right y’all, welcome back to another episode. Hey, today’s episode is pretty incredible. I know that I say that a lot, but I think they’re all incredible, and this one is particularly incredible. Today I’m talking with Dr. April Foreman and Dr. Tony Wood all about their use of artificial intelligence in suicide risk assessment. [00:01:00] So if you find yourself thinking, I have no idea what that even means, that’s totally okay. That’s a big part of what we cover in the episode.

So we talk about what artificial intelligence is in this context. We talk about how we as clinicians have basically been getting suicide risk assessment wrong for all these years. We talk about the ethics of using artificial intelligence in this research, touch on some cultural differences in language that predict suicide risk. And at the end, we get into a little bonus discussion on the possibility of uploading our consciousness to some sort of AI system in the future.

So this is action-packed. I have a ton of resources in the show notes. And I think you’ll see very quickly that April and Tony are so knowledgeable and so passionate and excited about this work that they’re doing. It is truly inspirational. So [00:02:00] stick around and check this one out.

Let me tell you a little bit more about them just so you have an idea of the guests today.

April is a licensed psychologist. She serves Veterans as the deputy director of the Veterans Crisis Line’s Innovations Hub. She’s an Executive Committee member of the board of the American Association of Suicidology. She served the VA as the 2017 Acting Director of Technology and Innovation for the Office of Suicide Prevention. She’s a member of the team that launched OurDataHelps.org, which is a recognized innovation in data donation for groundbreaking suicide research. We talk about that a lot today.

She works with people with severe emotional pain, advocates for folks with Borderline Personality Disorder. She is known for her work at the intersection of technology, social media, and mental health with nationally recognized implementations of innovations in [00:03:00] the use of technology and mood tracking. She’s also a recipient of the Roger J. Tierney Award for her work as a founder and moderator of the first sponsored regular mental health chat on Twitter, the weekly Suicide Prevention Social Media chat. Got a link to that in the show notes.

So April’s dream is to use her unique skills and vision to build a mental health system effectively and elegantly designed to serve the people who need it.

Now, Tony Wood is the COO of Qntfy. He’s also a founder of the Suicide Prevention and Social Media Chat on Twitter, which was the largest and most engaged social media community dedicated to connecting a variety of professionals, including those with the lived experience of suicide, with the best and latest research related to suicide and crisis prevention. His work on the social media aspects of suicide prevention, as a founder of the social media team at the American Association of Suicidology annual conference, [00:04:00] earned him the 2015 Roger J. Tierney Award for innovation.

His research at the intersection of social media and mental health has been published in a number of professional journals around the world. As a result of this work, he has become a sought-after resource for mental health professionals, private companies, and organizations interested in the intersection of new media, mobile data, and mental health.

So like I said, as you can tell from their bios, these folks have just been doing it, doing the work, for many years, and they’re doing it so well. And again, the energy that they bring to this conversation is completely infectious. So tune in and enjoy my conversation with Dr. April Foreman and Dr. Tony Wood.

[00:05:00] April, Tony. Welcome.

Dr. Tony: Hey Jeremy, how are you?

Dr. April: Thanks for having us.

Dr. Sharp: Yes. Thanks for coming on. I was thinking this morning, I think I’m more nervous for this podcast interview than I have been for any other in a long time. Not because of y’all, aside from the fact that you’re just like…

Dr. April: Who knows what can happen today?

Dr. Sharp: This is true. It is Friday the 13th. We’re recording on Friday the 13th. Things are a little wild. But yeah, y’all are clearly experts and rock stars in this area. I am just so excited. I’ve been thinking about this for a long time and what this interview might be like. And I’m afraid I’m not going to ask the right questions or [00:06:00] get the information that I am excited about. There’s just so much. I feel like I need to put that out there.

Dr. April: I feel like we’ll make it fun. And if we aren’t fun enough, folks at home who are listening, if you wanted to hear more, if we didn’t get to something really crazy or cool, maybe we’ll ask for a follow-up episode. We’re down for it.

Dr. Sharp: I love it. All right. You heard it here.

Let’s just dive in. I know from the introduction that people are probably already just like, what are we even talking about here when we say AI, research, social media, and assessing suicidality through those means? Let’s just start with a big-picture overview. What do all those words even mean? And how are y’all doing this in your lives?

Dr. April: Our mutual friend, Dr. Rebecca Resnik, has often broken it down so well, talking about how [00:07:00] fancy people like her husband, Dr. Philip Resnik, who are linguists, can take our language. And there are wonderful data scientists, and other people that we’ve worked with, who can take a lot of digital data points, some of which may be language, and turn those into very large data sets covering a lot of people. And using big data and data science processes, they can predict people’s suicide attempts, in many cases months before they happen.

And there have actually already been several successful attempts at this. So we know that we can assess suicide risk, and in some cases we can now predict attempts and deaths months ahead of time, using these very complicated big math procedures. So [00:08:00] it is not just theoretically possible. It is possible for people to leave digital or language samples, and for us to know something about their suicide risk and their mental health. Did I say that okay, Tony?

Dr. Tony: It’s not magic. It’s statistics.

Dr. Sharp: I love how simple you make it sound. I think for a lot of people though, this is like the next level, the year 3000 kind of stuff as far as what we’re doing on a day-to-day basis, right?

Dr. April: Conan O’Brien moment, right? I won’t sing the song. It’s probably copyrighted. So here’s the deal. In suicidology, until very recently, we had not made a lot of progress in understanding and predicting suicide; suicidology didn’t really do data science. And so when you, Jeremy, or I went to graduate school, they told us you can’t predict suicide. You can assess ambient risk. And then there’s Craig Bryan’s Fluid Vulnerability Theory, which holds that a person’s suicide risk [00:09:00] is really ebbing and flowing, and whatever someone says in the office is pretty much good for that 30 minutes in the office.

And so we’ve all developed a system of care, and a system of understanding the science of psychology, around some fundamental assumptions about suicidality, and this is really challenging. Of course this feels like the year 3000, but Amazon already knew what kind of laundry hamper I wanted to buy this morning. I was amazed: $29.99 with Prime delivery.

And so we know that there are some people who have a real financial interest in predicting low base rate human behavior and making money off of it. And the issue is, when we apply it to suicide, we have a conundrum where people in our field often make one of two really dumb mistakes. And I’ll tell you what they are so you don’t make them, because if you make one at a party, I’ll secretly judge you. And you don’t want that.

You’ll do one of two things. You’ll [00:10:00] think this is Harry Potter magic, that it’ll be like a Sims game: everybody who’s suicidal will have a little blue light above their head, and we’ll just go find them and save them. Not true. Or you’ll think this is total hogwash. It’s like how people felt about email in the early 1990s when we all first got it: this is just a trend, nobody’s going to be using this, I’m just not going to bother with it.

Dr. Tony: We call it the sorry grandfather response.

Dr. April: Sorry, grandpa. Don’t do either of those things. The issue is, this is happening, but there are some realities about what’s likely to happen that I think are important for people to understand. And when you have one of those polarized reactions, I don’t have a lot of respect for you. As psychologists, we should be better than that.

Dr. Sharp: Right. So, the happy medium there is maybe just curious, right?

Dr. April: Yes, I like it. The middle path. So curiosity: ask some questions, become educated, like healthy copers, right?

Dr. Sharp: Yeah, for sure.

Dr. Tony: The bandwagon. Just [00:11:00] pay attention.

Dr. Sharp: There you go. We’re good at that. We should be, anyway.

Dr. April: Right. These are skills to help us cope with AI.

Dr. Sharp: Right. Well, and I think putting it in a real-world context probably helps folks. And the Amazon context is a really good example. We’ve been, well, not me, other people have been using online data and human behavior to predict future behavior for a long time. It happens every time Amazon suggests something for you or Facebook suggests something for you. It’s just pulling all of the data that’s out there and then making predictions from it. And like you said, Tony, it’s just math. It’s just complicated math that none of us can probably do, but people do it and it’s out there.

Dr. Tony: Yes.

Dr. Sharp: I love that.

Dr. April: And we can probably… I really liked your idea of discussing some real-world examples and what it’s going to look like. So we are now at the part where we [00:12:00] can tell people what it’s going to look like, how you might use it in your clinic, what some of the limitations are, what it will mean and not mean. I think we’re at that part, right, Tony?

Dr. Tony: I think so. I mean, the things that we say here will likely be laughed at by our future colleagues, because we’re in that sort of 19th-century telephone era, when people thought telephones were going to be really exciting and all these things were going to happen: flying cars and all this stuff. We got a lot of those things, but it wasn’t shaped the way it looked to the people in the 19th century. So that’s what we’re talking about now.

How is this really going to be integrated into the system of care? How are people going to ethically determine a person’s state, especially in emergency cases? But the beauty of this technology is that we get way ahead of crises. A very large number of people jump to [00:13:00] this pre-crime idea, where you’re looking forward into somebody’s future.

Dr. April: Yeah, like Minority Report, but for suicide.

Dr. Tony: That’s what they all think of. And it’s not the worst analogy ever, but it has some issues, in that you’re not really trying to arrest someone for a for-sure outcome. You’re trying to influence future behavior. So it’s very similar to the regular work that psychologists do. You’re not policing someone; you’re really trying to help them help themselves make a decision that gives them a better outcome.

Dr. April: Yeah, I think that’s really good. And so what we’re also talking about is, many of us, like me, have a home blood pressure cuff, but that doesn’t make me a cardiologist. I still need to go to a doctor even though I can take my blood pressure. A blood pressure problem can still mean a lot of things. And so you’re still going to need clinicians. You’re just going to need clinicians who are, I think a little bit more sophisticated about how to use tools.

So when we talk about some of these algorithms for [00:14:00] predicting suicidality, the thing that people are incredulous about is that there are several methods, published by well-respected suicidologists and data scientists, for predicting suicide risk six months, one month, one week, one day before it happens. And not just risk: actual suicide attempts and deaths. Predicting that. And that’s pretty amazing.

What people probably don’t understand, and what I think brings us down to earth, is that even the best algorithms for these things, and some of our work has led to one of the better algorithms, will still have a false positive rate. You’re going to have three false positives for every one that you predict accurately. Now, that’s way better than the human error in our clinics, which is really high. Not because clinicians are bad, although we’re bad when it comes to suicide, and we’re pretty untrained. As ground truth, before the technology: [00:15:00] 90% of clinicians couldn’t pass a basic competency exam on assessing suicide risk and doing an evidence-based intervention. 90% of licensed clinicians. That’s just us with licenses.

But what we know is that with assistance from these algorithms, you could get much better. And for the folks who are false positives, even if you’re a false positive for a suicide attempt in six months or in one week, that doesn’t mean it will happen. But those odds of 1 out of 4 are much better than what we have now. And the folks we’re finding as false positives are still in a tremendous amount of pain and not doing well. So, if I go to my doctor’s office and I have high blood pressure, it could mean I’m going to have a heart attack. It might mean a lot of other things, but it means something’s probably not good, and my doctor should follow up. And that’s what this stuff is capable of.
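To make the ratio April describes concrete, here is just the arithmetic behind “three false positives for every one that you predict accurately,” with illustrative counts rather than any real algorithm’s output:

```python
# April's stated ratio: roughly three false positives for every
# accurate prediction, i.e. 1 out of every 4 people flagged.
true_positives = 1
false_positives = 3

# Precision (positive predictive value): of everyone the model flags,
# what fraction is predicted accurately?
precision = true_positives / (true_positives + false_positives)
print(precision)  # 0.25, the "1 out of 4" April describes
```

Even at 25% precision, as April notes, the flagged group tends to be people who are struggling, so the false positives still point somewhere clinically meaningful.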

Dr. Tony: And the blood pressure cuff [00:16:00] lowers your risk of a major cardiac event. That’s really what it does ultimately. It’s way upstream of that. But that’s precisely what we’re trying to do with these algorithms. Give the clinicians the tools that they need to make a better decision faster so that the patient and the individual and the client and their families and their friends can all participate in having them never see another crisis event.

Dr. Sharp: Of course. So I think we’re laying some really nice groundwork for this. And hopefully, folks are with us through this discussion so far. So the idea that we are or y’all, I keep saying we, I do not want to include myself in this amazing work, but y’all are…

Dr. April: You’re here now.

Dr. Sharp: Thank you, April. You’re so welcoming. But y’all are doing this work where you can somehow pull that out from social media posts and other online sources maybe, but primarily social media and [00:17:00] run it through this complex math and predictive software algorithms kind of thing to figure out someone’s risk for suicide at a certain period of time in the future. Is that a fair summary?

Dr. April: So we want to add one piece to this, which is ethics, right? Is that what we want to add?

Dr. Tony: First, I want to add about the data source…

Dr. April: Okay, and then we’ll talk about ethics.

Dr. Tony: And the reason why I want to focus on data sources a little bit is that people become very focused on this pretty fast. So they say, well, what data are you bringing in because that’s the basis of all measures, like how you do it, how’s that applied? And where does that information come from?

Social media data is of course a primary source because it’s a very rich source of language. However, there is a bunch more data, what I would call exhaust from your cell phone, but we call it digital life data. Everything that you do that’s digitally mediated can be input into an algorithm and taken into [00:18:00] consideration by these machine learning models, because you don’t have to have a human being deterministically building a set of rules. You’re using statistical tools to build those models dynamically as you go, for an individual person, down to the actual individual Sally Jones.

Dr. Sharp: Tony, can you give me examples? When you say exhaust from your cell phone, what does that mean exactly?

Dr. Tony: All the things you do on your cell phone. Everything that everybody does on their cell phone. What games do you play? What time do you get up? How many calls a day do you make? When do you send emails? When do you not send emails? When do you turn your phone on? When do you have your phone off?

Dr. April: When are you shopping? How much are you shopping for?

Dr. Tony: Yeah. All five of those W’s.

Dr. Sharp: Of course.
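The “digital exhaust” Tony lists (when you call, when you get up, when you shop) can be pictured as simple per-day features computed from timestamped event logs. A minimal sketch; the event names and fields here are invented for illustration, not any real schema:

```python
from collections import Counter
from datetime import datetime

# Hypothetical digital-life events: (timestamp, kind). In practice these
# would come from phone logs, app usage, purchases, and so on.
events = [
    (datetime(2020, 3, 1, 7, 30), "call"),
    (datetime(2020, 3, 1, 9, 10), "email"),
    (datetime(2020, 3, 1, 23, 55), "game"),
    (datetime(2020, 3, 2, 3, 40), "game"),
    (datetime(2020, 3, 2, 8, 5), "call"),
]

def daily_features(events):
    """Turn raw timestamped events into per-day features a model could consume."""
    by_day = {}
    for ts, kind in events:
        day = by_day.setdefault(ts.date(), {"counts": Counter(), "hours": []})
        day["counts"][kind] += 1
        day["hours"].append(ts.hour)
    return {
        day: {
            "calls": d["counts"]["call"],
            "first_activity_hour": min(d["hours"]),  # roughly: when did they get up?
            "late_night_events": sum(h >= 23 or h < 5 for h in d["hours"]),
        }
        for day, d in by_day.items()
    }

feats = daily_features(events)
```

The point is only that mundane logs become numeric features; which features matter, as April says next, is what the clever people figure out.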

Dr. April: Yeah. And that’s really interesting. […] and he’s a brilliant data scientist; he just got inducted into the [00:19:00] European Academy of Sciences. You have to be a really good drinker to keep up for the night when you go out with peers like that, so I have to really check my memory on these conversations. But he said, I only need to know two things about you to know who you are. If I know that on Saturday night you’re on a street where there happens to be a synagogue, I almost certainly know that you’re Jewish, and I also know a few things about you. And if I know one more thing, I can tell you probably what region of the country you live in, your name, or whatever. So he really only needs very limited data points to know a lot about people. People are generating a lot of data points, and there are people who are incredibly clever about knowing which data points say what.

And so some of this is very reductionist. And then when it comes to predicting suicidality, [00:20:00] what happens when we talk about this is someone says, okay, so which three data points tell us someone’s going to kill themselves? What is it that really causes it? And what we say is, well, the algorithms are more complicated than the three risk factors you’ve heard on the public service announcement model. So we really don’t know that. We might be able to figure that out someday, but we don’t know it yet.

Dr. Tony: This puts me on a soapbox, which is: please let go of risk factors as soon as possible.

Dr. April: It is not a thing.

Dr. Tony: It is not true.

Dr. April: It was a PR thing.

Dr. Tony: Yeah, mostly.

Dr. April: There is no agreed-upon set of suicide risk factors that the literature agrees on. And there is no one list that suicidology backs. I’m on the actual listservs where we communicate about this; it’s really just a PR thing.

Dr. Tony: And if you want the paper, I believe it’s Franklin et al., 2016, maybe. Joe Franklin. He’s at Florida State now; he came out of Harvard. [00:21:00] The point of the story is that he did a multi-study review, and none of the risk factors predict better than chance.

Dr. April: So when psychologists go to do these assessments clinically, they’re told to pay attention to risk factors. Different psychologists get different risk factor templates. All of these templates, including the Columbia, which is one of our better ones, shout out to Kelly Posner, who is the Anjelica Huston of suicidology, gorgeous, wonderful human. But even our best ones, like the Columbia, are about as good as, or slightly worse than, chance.

Dr. Tony: At a given moment in time. And that’s the part a lot of people don’t understand.

Dr. April: So what happens is, I’ll talk to psychologists or licensed clinicians, and they’ll be like, oh, they told me this, so I knew they weren’t going to attempt. And I’m like, these clinical narratives about suicide are not [00:22:00] evidence-based; you should not do that, ever. You might as well get a crystal ball and join a sideshow. Because really what we’re talking about is the fact that we just never understood suicide well enough to behave like that. We just haven’t.

But we’ve held licensed providers very accountable without having science to back it up. And this is the idea: now let’s get some science to back it up. So we know that algorithms have been developed that assess risk, which is different from predicting an attempt, right? Assessing risk versus predicting an attempt. Your audience knows that. What we know is that the algorithms that assess risk are probably operating 40% better than well-trained clinicians, some of the best suicidologists, as a matter of fact.

We’ve participated in some data science efforts to take language data from SuicideWatch on Reddit, have clinicians assign risk, and then see if you can get the [00:23:00] algorithm to find the risky posts. And what we know is that these algorithms outperform the average clinician, and may even outperform excellent clinicians.
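The Reddit exercise April describes, where clinicians label posts and an algorithm learns to find the risky ones, is at heart supervised text classification. Here is a toy naive Bayes sketch with made-up training lines; this is not the real SuicideWatch data, nor any production model:

```python
import math
from collections import Counter

# Toy labeled "posts" standing in for clinician-rated language data.
train = [
    ("i cannot see a way forward anymore", 1),   # rated risky
    ("everything feels hopeless and heavy", 1),
    ("great hike this weekend with friends", 0), # rated not risky
    ("excited to start my new job", 0),
]

def fit(examples):
    """Count word frequencies per class (multinomial naive Bayes)."""
    counts = {0: Counter(), 1: Counter()}
    for text, label in examples:
        counts[label].update(text.split())
    return counts

def score(counts, text):
    """Log-odds that a post belongs to the 'risky' class."""
    vocab = set(counts[0]) | set(counts[1])
    logodds = 0.0
    for word in text.split():
        # Laplace smoothing so unseen words don't zero things out.
        p1 = (counts[1][word] + 1) / (sum(counts[1].values()) + len(vocab))
        p0 = (counts[0][word] + 1) / (sum(counts[0].values()) + len(vocab))
        logodds += math.log(p1 / p0)
    return logodds

model = fit(train)
print(score(model, "i feel hopeless") > score(model, "fun weekend hike"))  # True
```

Real systems use far richer models and far more data, but the workflow is the same: labeled language in, a ranking of risky posts out.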

So right now in psychology, we’re practically reading the vapors; we’re tasting people’s urine to tell what kind of malady it is. Well, I mean, people did do that, right, even a hundred years ago? That’s the level we’re operating at. But data science will allow us to do better, with things like knowing the false positive rate and knowing how to ethically do this research.

Dr. Sharp: Of course.

Dr. Tony: A stethoscope is much better than an ear horn.

Dr. Sharp: Say that again.

Dr. Tony: A stethoscope is much better than an ear horn. And that’s really what this is. It’s another tool. It’s not magic.

Dr. Sharp: Right. It’s an evolution in a way.

Dr. Tony: It’s a direct descendant of [00:24:00] evidence-based measures.

Dr. Sharp: Right. I feel like I need to ask because I’m guessing that people are listening and asking, well then what do we do? I mean, if the risk factors don’t work, if we’re bad at assessing suicidality but we don’t have access to these algorithms, what do we do?

Dr. April: I’ll give you three things to do.

Dr. Tony: And one of my favorites, just be kind. Try that. Try not to be afraid of your suicidal patients because you can’t predict their suicidal behavior any more than they can.

Dr. April: And maybe less. He’s not wrong. So number one, we’ve all got to operate in the land of liability. If you don’t have the training, get it. And get and renew training in assessing and intervening with people who are suicidal every licensure cycle that you have. The training is pretty easy to get. There’s also more intensive training: get CAMS or DBT [00:25:00] training, which, by the way, is excellent. It has been shown to reduce suicidal thoughts and behaviors, so it has an evidence base. It’s a harder therapy to do, but it works great. I’ve seen it work with folks who have really profound chronic risk, and watched them recover. But whatever training you can get that’s evidence-based, get it.

Have policies in your clinic or in your practice. Then follow your training and follow your policies. Do them consistently. And don’t be a jerk to your patients. I’m a person who works with high-risk folks, and I would tell you that there’s a certain amount of bravery in continuing to practice knowing that there are tools being developed and you don’t have them yet.

And then I would say, instead of being afraid of what’s coming, or treating it like it’s going to be magic, just stay really educated, because these are the tools. AI will be like electricity was in industry a hundred years ago. You already have AI: if you’re using Google or Outlook, there are already [00:26:00] AI features making things like your email work better.

You’re going to increasingly see these in the ways you manage much of your life, or when you buy something on Amazon. And sometimes it goes wrong, like when Facebook keeps recommending that I rent plus-size clothing, get a box, wear the clothes, and then give them back, and it’s not even my right size. I’m not really a clothing renter, but Facebook really thinks I am. So AI will get some things right and some things wrong. Just keep yourself a little educated. Don’t be resistant, and don’t overly buy in. I think those are really good coping skills.

Dr. Tony: The security industry was an early adopter, second only to the finance industry. So if you make purchases with electronic money, AI monitors all of those for fraud. It monitors all of those patterns and determines which human behaviors are appropriate and which ones are not, which ones should be flagged for human review, and which ones are just definitely wrong and [00:27:00] need to be shut down.

Dr. April: The financial industry has a lot more money than we have, and a lot more liability than we have, and they’re amazing at what they do. They have a wonderful human-AI hybrid. I have my fraud settings turned up, because China got my data; a lot of federal employees’ data got taken. So now I really monitor things, and it’s caught several things. I’ve never lost a dollar, knock on wood, but a few times a year it flags some of my purchasing behavior, and then you call in. There’s a system for doing that.

So just think about that in our clinical practice. What we’re going to be doing is using these tools in a human-AI hybrid, where you get information, and then you still use your clinical judgment and develop processes to keep people even safer and make sure they’re taken care of.

Dr. Sharp: I love this.

Dr. Tony: Multichannel, always on. [00:28:00]

Dr. Sharp: Yeah. Well, I think for a lot of people, this is an unfamiliar concept. It’s a little bit scary. And we hear about big data and tech companies taking over the world and that kind of stuff. There’s just a negative association. That’s true.

Dr. April: I just talked about that today. I was like, I don’t know if it’s going to be good, but anyway, keep calm.

Dr. Sharp: I think for a lot of us, there’s some cognitive dissonance to resolve with this too. Okay, well now, how do we use a tool like this that has a negative association with something that is very important in our field?

Dr. April: I would say, first of all, just advocate for better science. I think our field should start to say: you’re holding me responsible without the science to support me, and that’s not okay. We don’t fund very much of the science, but we hold therapists very accountable. So I think we should be joining with our friends who are advocating for leading-edge science, saying this needs to be funded, and we need a good evidence base.

We were going to talk about [00:29:00] ethics. So, the way that we’ve been able to do these projects, because I think people are very worried about their data being used without their permission: we’re actually part of a small group of people that came up with the concept of data donation for this kind of thing. The reason we’ve been able to do these projects, without very many resources, actually, was coming up with a way for people to donate their social media and digital data to scientists who pass IRB review and get all the appropriate approvals. And we’ve now done this with two data sets.

And so we’ve now got a model to very quickly get data to data scientists to work on, but it’s data that’s donated. It’s data that’s ethical. Because one of the big challenges with this, the big barrier with this science, is getting it to the hands of clinicians so they can be using this and we can do what we all care about, which is helping people recover and live good lives.

Dr. Tony: And advertisers haven’t done that, and creditors haven’t.

Dr. April: But people want to use [00:30:00] it, and not always ethically. Donation is one way that you can do this ethically. And we can talk about some really crazy stuff if your listeners want to hear some crazy stuff; we know some crazy scenarios that we think are interesting and ethical. But this data can be donated ethically. It can be released to people who apply to use it and have previously obtained IRB approval. We make them read ethics articles and agree to secure handling of the data, and a bunch of other things.

So if we can get datasets into data scientists’ hands, and if we can fund suicide research at the level of the other top 10 healthcare concerns in the country, just at that level of impact, we’re not asking for extra, there will be plenty of money to do this research, if we get people’s data. So can we get people’s data? And then can we fund that research so that clinicians have the tools and the science to support what we are already holding them accountable for?

Dr. Sharp: Such a good point. Yeah. The idea that we’re [00:31:00] already being held accountable for this, yet it’s not backed by science, is crucial. I just want to highlight that. I don’t know that a lot of us are thinking about that fact.

So let’s operationalize this a little bit. What does this actually look like? I mean, how are you getting this data? I think ethics is a part of that, of course. How are you gathering all of this, and what are you actually doing with it to create some usable information? I’m curious about the nuts and bolts, and I think other people might be as well.

Dr. April: So if you want to see an example online, dear listener, please go to ourdatahelps.org; veterans can go to warriorsconnect.org. There are various ways. Tony is the COO at Qntfy, which has just [00:32:00] generously donated hosting of this data. And it’s the same way that you collect data for, say, coupons, where you say, oh, connect your Facebook account and you get a 10% off coupon. There are applications that have been used to make your user experience pretty seamless. And there’s an informed consent that we wrote together with people with lived experience: suicide attempt survivors and people who had lost someone to suicide.

So they helped put it in plain language, and they approved it. We really worked collaboratively on this. Then you can donate your data, and it works retroactively, donating your past data, and you control what you donate: your Facebook data, your Twitter, your Fitbit. And then you can donate going forward, and […] I don’t even know if I’m using the right language, but you can donate going forward, and you can stop at any time. That’s one way that you can donate data.

You can also [00:33:00] donate the data if you are the account holder for someone who’s died by suicide. So we can then get the data of someone who’s died by suicide. And we can look at that and compare that to folks who had attempts and survived or folks that have never been suicidal.

Dr. Tony: And then for veterans and military, it’s warriorsconnect.ourdatahelps.org. That’s a project for the George W. Bush Institute and their Center for Veterans Health.

Dr. April: So it’s like real people, fancy people.

Dr. Sharp: Right. Real fancy people. I have to ask. Is there a selection bias at work here if people are donating their data?

Dr. April: For sure. Right now. But…

Dr. Tony: The beauty of this is you’re looking for a sample of conditions that are either self-stated or diagnosed. And once [00:34:00] you have a significant sample of self-stated or diagnosed conditions, then you can go out and validate that against the general population using data that’s openly available. So you can do a much better job than you could in a traditional research setting, without the ability to do those comparisons.

There’s just so much data flowing around out there. Suicide data is really interesting. Trying to predict suicidal behavior is a funny problem because it’s not really a big data problem. Not really. It’s a small data problem. So we’re talking about very precise models versus much more general models, like determining somebody’s age or gender or race or ethnicity. Those are big data problems. This is a very small data problem because the number of people who attempt suicide is relatively small. It’s big, but it’s relatively small. And the number of people who go on to die by suicide, which is what we actually want to predict, is very small in the United States.

The ratio is [00:35:00] something like 1.2 million-plus attempts annually versus 6,000 deaths. So your true positives get very small very fast. The way you deal with that is to get as good a sample as you can from people who have experienced suicidal behavior, and of course your decedent data.
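To make the base-rate point above concrete, here is a small illustrative sketch. The sensitivity, specificity, and base-rate numbers are invented for illustration and are not from the episode; the point is just that even a strong classifier produces mostly false alarms when the outcome is rare.

```python
# Illustrative only: why rare outcomes make "true positives get very
# small very fast." Even a model with 90% sensitivity and 90%
# specificity yields mostly false alarms at a tiny base rate.

def positive_predictive_value(sensitivity, specificity, base_rate):
    """P(outcome | model flags positive), by Bayes' rule."""
    true_pos = sensitivity * base_rate
    false_pos = (1 - specificity) * (1 - base_rate)
    return true_pos / (true_pos + false_pos)

# Hypothetical: 1 person in 10,000 experiences the outcome in some window.
ppv = positive_predictive_value(0.90, 0.90, 1 / 10_000)
print(round(ppv, 4))  # ~0.0009: fewer than 1 in 1,000 flags is a true positive
```

This is why a good sample of people who actually experienced suicidal behavior matters so much: it is the only way to estimate and improve those tiny true-positive rates.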

Dr. April: And they can give timestamps. They can say, this is when that happened, and here’s all my data, so then you know. When you think about the number of clinical trials or research on this, could it be a better sample? Yeah, but there are so few data sets and this research is so hard to do. That’s a step forward.

Dr. Tony: And for psychologists, these numbers are huge.

Dr. April: Oh yeah. It’s not 200 people. We’ve got like 6,000 or 8,000.

Dr. Tony: It’s not 75 people on a campus; we’re talking about 3,000, 6,000, 9,000 individuals. It breaks down when we get into individual conditions. We have a really great [00:36:00] data set in a partnership with the University of Maryland on a long-term schizophrenia study. So we have a great model for schizophrenia. We have a great model for anxiety disorder. We have great stuff for major depressive disorder. When you combine these, you end up with a whole picture of a person that is much more accurate than an individual clinician could produce in the time they have to provide care. So this is the way that you could… like I said, you could use an ear horn to figure out people’s hearts, but it’s much easier to use a stethoscope.

Dr. April: Or like strap them to something, right? And I think what’s really interesting is that these can really lead to the development of monitoring tools. Craig Bryan, one of the best suicidologists in the world, who uses this kind of analysis, says suicide risk is very fluid. So maybe what you’re doing is having a high-risk patient who says, I’m going to put this app on my phone, it’s going to [00:37:00] monitor my digital activity, and it’s going to alert me and other people if there’s a change, much like you might wear a harness. If you have a heart condition, they might have you wear a harness to measure what’s happening with your heart even when you’re not in the doctor’s office, because most of the things related to your fluid suicide risk don’t happen in doctors’ offices. And in doctors’ offices and psychologists’ offices, as most of your listeners know, we do a lot to make that environment very standardized, very stabilizing.

So we do a lot of things in our office that might mitigate risk or reduce expression of risk. And so we might have clients that look pretty together in our office, but the minute they go out into a different environment, they’re not going to be doing as well. So these tools could possibly be collecting digital signals and alerting people, like a pacemaker, like my father-in-law’s pacemaker, right? Those things could happen.

Dr. Sharp: Right. I’m glad you brought this up. I was going to ask those questions, but here we are. So just to put a fine point on it, are [00:38:00] we at a place where there is an app that our patients could put on their phones that would do this, yet?

Dr. April: Those things are regulated by the FDA. So could that happen technologically? Absolutely. But I would say, in our field, what we would really need is better funding to do this at a bigger scale so you can have FDA trials, because if you’re monitoring a lethal health condition with an application, not only does the algorithm need to work, but a bunch of other aspects of the technology design need to be pretty reliable and work, and then they need to be extensively reviewed for safety. That can totally be done. I mean, we’re developing a vaccine for COVID in, like, a few months, right? It feels like forever when it’s only been a few months. And I hope someone listens to this podcast in 10 years and goes, “Oh yeah, there was a pandemic.” This can be done with enough resources, but it really does take the same level of attention and development that other things in healthcare take.

[00:39:00] Dr. Sharp: Of course.

Dr. Tony: Regulation of medical devices is one track that this burgeoning industry of AI and behavioral health has headed down. There’s a whole collection of folks headed down the medical device route, which is FDA. And then there’s a whole bunch of other people headed down the other direction, which is more open tech stuff that you could download from the app store. We’re on the bridge in between. My little company, Qntfy, is doing pilot projects with healthcare systems: closed pilots that are available to select patient populations. And soon enough, we think this technology will be widely available, whether it’s us or somebody else.

Dr. April: And you want to road test this thing safely. I have done things like using Mood 24/7 in a clinic with high-risk patients. Go look that up; I think I wrote an article about this called “Just Text Me” like 10 years ago in the Journal of Collaborative Patient Care. But the issue is, if you’re going to use things like this [00:40:00] or applications that work similarly, there are just some real fundamental things, like: these never take the place of good clinical care. You still need a standard safety plan. You can use them as an adjunct, but not in place of good monitoring, good safety planning, responsive clinical care, access to emergency and crisis services, et cetera.

Dr. Sharp: Sure. That’s incredible. I know technologically, it sounds like we could do it. This call for funding is a theme throughout our conversation so far. So I want to highlight that.

Dr. April: If you fund things at, like, $100,000 a year, or what, no, it’s a few million a year, I’m sorry. If you fund suicide research at the same rate that you fund smallpox research (PS: no one has had smallpox in the US in, like, forever; it died off), guess what you won’t get? You won’t get advances in science. I’m not saying that funding solves every problem, but it’s literally the one thing [00:41:00] we haven’t tried, which is funding science at the scale of impact.

Dr. Sharp: Let’s take a quick break to hear from our featured partner. With children currently exposed to conditions including a global pandemic, social injustice, natural disasters, and isolation, you need a trusted tool that can screen for symptoms of trauma quickly. The TSCC screening form allows you to quickly screen children ages 8-17 years for symptoms of trauma and determine if a follow-up evaluation and treatment is warranted.

The TSCYC screening form does the same for children ages 3-12 years. Both forms are available in Spanish and support the trauma-informed care approach to treatment. These screening forms are now available through PARiConnect, PAR’s online assessment platform, which provides you with results even faster. Learn [00:42:00] more at parinc.com/tscc_sf or parinc.com/tscyc_sf.

All right, let’s get back to the podcast.

Well, if there’s anybody out there who happens to be able to fund the science at the scale of impact, please.

Dr. Tony: All congressmen and senators, please, consider it carefully.

Dr. April: I’m a federal employee, so I make no statement that would be in violation of the Hatch Act.

Dr. Sharp: Of course, I will say it for you. Well, there’s definitely potential. I mean, the science is incredible. You talked about models for anxiety and depression. One thing I wanted to ask you all about just from reviewing some of the research in this area is, it looks like there is maybe some work into cultural differences in language as well. Particularly around depression, there was an article that I paid attention to. So I wonder, can either of you speak to that in [00:43:00] any detail and what that research is about?

Dr. April: There’s science for identifying depression and anxiety even across populations, because that’s so common, right? That’s like 1 out of every 3 or 4 people in the population. And you can actually do models that are pretty specific to a lot of cultural, gender, and geographic issues. Tony, what do you say?

Dr. Tony: Oh, absolutely. As far as I know, our group at Qntfy actually makes these algorithms first and tests them. We have race and ethnicity classifiers stacked in our neural nets for anxiety disorder and major depressive disorder especially, but we also work on the schizophrenia model. So we take those racial and ethnic and language differences into account by stacking those algorithms together and hooking them together in a very technical way. But that’s ultimately what you’re doing.
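As a loose illustration of the "stacking" idea Dr. Tony describes, outputs of first-stage classifiers (for language and for demographics) can be fed as features into a second-stage model. Everything below is an invented toy, not Qntfy’s actual system: the keyword lists, the constant demographic score, and the weights are all placeholders to keep the sketch runnable.

```python
import math

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

# Toy stand-ins for trained first-stage models. A real system would use
# neural nets; these just score keyword counts to keep the sketch runnable.
def depression_language_score(text):
    cues = ("hopeless", "alone", "tired")
    return sigmoid(sum(text.lower().count(c) for c in cues) - 1)

def demographic_score(text):
    # Pretend output of a demographic classifier, e.g. P(speaker in group A),
    # which lets the second stage adjust for group differences in language.
    return 0.5  # constant placeholder

def stacked_model(text, weights=(2.0, 1.0, -0.5)):
    """Second-stage model over first-stage outputs ("stacking")."""
    w_lang, w_demo, bias = weights
    lang, demo = depression_language_score(text), demographic_score(text)
    return sigmoid(w_lang * lang + w_demo * demo + bias)

print(stacked_model("so tired and hopeless, I feel alone") >
      stacked_model("what a lovely afternoon"))  # True
```

The design point is simply that the second stage sees both the language signal and the demographic signal, so the combined prediction can be conditioned on group differences rather than assuming one population.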

Dr. April: And let’s talk about ethics again, because algorithms can be sexist and algorithms can be racist. And they are. So, [00:44:00] the great news is that some of the people we’ve collaborated with are on those tech industry panels looking at racism, sexism, bias, and abuse in AI. I think Meg Mitchell is on one of those. I think she’s at Google right now. If you want to know Meg Mitchell, you can see her on Netflix; she’s on the Bill Nye show, episode 3, showing how you can use AI to identify pictures. And we have friends like Meg who are familiar with this research and have been around as it has been developed.

And they’re really thinking about bias: racism, sexism, cis-gender bias, things like that, in our predictive algorithms, and making sure that when we’re designing things, we aren’t replicating our own oppressive biases. There are people in the AI industry interested in that. And folks like us like to talk to those people.

Dr. Tony: Meg is a pioneer in [00:45:00] computer vision specifically, which, as you can imagine, is a hot item for augmented reality, for self-driving cars, for airplanes, for all sorts of things. Ultimately, she has invented technology that allows a set of algorithms to look at a picture, identify the elements in it, and tell you things about it. So you can show it a dog in autumn, and it can tell you that it is a dog and it’s a Labrador, and that’s a pine tree, and this is sometime in the fall.

Dr. April: And recognizing a dog on the internet and differentiating it from a cat or a bear is actually harder than you think. So there are people who are thinking about those ethical things. And if you were one of those, sorry, cranky grandpa listeners out there, you’d be like, it’s got all these problems. And it absolutely does, just like all technology does, but we as humans are worse, and you’ve got no minder to make sure I’m not being racist or sexist when I’m with my patients. You just hope I’m a decent person, right? [00:46:00] So I think that you’ve got to look at those things.

Dr. Tony: Absolutely. It’s a balance. Once again, you wouldn’t use a blood pressure cuff to diagnose hypertension. So don’t. You wouldn’t use these tools to ultimately diagnose a disease. You might use them to help you diagnose one, but they wouldn’t do it for you. They don’t do it for you in that specific way. They give you a great score on what’s probable, I’ll tell you that. And it’s very accurate, but you wouldn’t want to use it to diagnose diseases at this point because we’re just not at that level. But maybe someday.

Dr. Sharp: Sure. I think that’s the important thing to keep in mind. It’s easy, like you’re saying, to take that cranky-grandpa approach and naysay: this is not perfect, and whatever. But it’s getting better, and it seems like in a lot of cases better than what we can do just as clinicians on our own.

Dr. April: If your primary care doctor [00:47:00] tests your diabetes risk by testing your urine, you need to get a better doctor. And if they’re grumpy because there is bias in the lab tests and the lab tests have error, so that’s what they choose to do instead, you would go, “Oh, you should not have a license.” Similarly, we should be looking at that. The question is not, is the AI perfect? It is: how is this better or more accurate? What direction are we moving in, in terms of accuracy and help? Ultimately, it should be a contribution to people getting well. Does this move us forward? Don’t get cranky unless you think it’s harming people and not moving us forward.

Dr. Sharp: Yeah, that makes sense. Well, I think that with a lot of this stuff, it’s just fear. Fear is driving a lot of these reactions.

Dr. April: We’re experts in modulating irrational fears to help people be good copers and problem solvers. So I expect more from us, Jeremy.

Dr. Sharp: Fair enough. [00:48:00] Okay. No, this is good. I think it’s true. I have to ask the question that I’m sure y’all have been asked before. It comes up in every discussion about AI in any sort of medical context, but is this going to take our jobs?

Dr. Tony: No.

Dr. April: Maybe.

Dr. Sharp: Let’s dig into that. Okay.

Dr. Tony: The problem is so big that it’s hard for us to imagine in the current system. So there are so many people that do not receive any care in the world right now that the capacity issue as we perceive it today is not the same. Now, could these psychologist jobs be very different in 25 years? Sure. But it’s not going to take their jobs away. Not at all. There’s just too much.

The beauty of logarithms, the beauty of this [00:49:00] orders-of-magnitude growth, is that it’s possible to make big changes in short periods of time. And that’s what most people are afraid of. But it’s not going to take away psychologists. The reality is, in the short to medium term, you need more of it, not less.

Now, we need more of a specific kind. We need people who understand the technology at a level where they can contribute. That’s a thing that’s very likely to happen with the requirements of these future jobs. Everybody is now expected to be able to use a word processor. There’s no exception; there’s pretty much no job I can think of. Even the most average electrician’s assistant has to be able to tick boxes on their tablet to keep track of their work. So that kind of change is going to happen. Definitely, the EMR and EHR systems are going to continue to be more aware, and clinicians will be expected to navigate that, [00:50:00] but it’s not displacement. Not like we’re thinking, in the short term.

Dr. April: I see it differently, but that’s okay, right?

Dr. Tony: Maybe. It depends on what you’re seeing.

Dr. April: We have a good time. If you’re listening, call us up, we’ll go out, right? So here’s the deal. If you want to keep doing your therapy practice like they did, walk-in talk therapy on a couch, because you want to see 6 clients a day and do handwritten notes while someone lies on the couch and talks to you, and you do the talking therapy from the Victorian era, yes, this is going to replace that. And if you think that that is what’s best, you need to double down. I don’t think that. I think that was a very good advancement for the 1800s, but it is the 2000s, kids.

So I don’t go to a cobbler to make my shoes. There are some [00:51:00] artisanal cobblers still making shoes who’ll make them for my individual feet, but I wear shoes that were probably manufactured by people in a third-world country, and I’m probably not really proud of that. And then they were shipped over here and I might’ve bought them from Zappos. That’s how we all get shoes that are cheaper, better, and faster in some ways, and in other ways it’s bad for the environment and human rights, but let’s not get into that just for a minute. Let’s just talk about changes. We get to choose differently. We don’t go to cobblers, but people are still making shoes and we’re still wearing them.

The issue, quite simply, is that we’re not helping people get well at scale. We looked at the number of people who are at high risk for suicide, our best estimates, and then I applied some simple models to it: what we know is evidence-based, like what someone should get when they walk into a clinic, for us to just manage the suicidal patient as we have been doing. So imagine it’s The Sims, and that blue light comes over the head of everyone who is suicidal, so we don’t even have to find them and assess them; you [00:52:00] just see them. Every year, we would have to take every licensed mental health provider, from clinical social worker to psychiatrist and everyone in between who’s independently licensed, and they would need to have 50-60 people on their caseload. And we would have to employ them full-time, working at the top of their license, doing nothing else but suicide care.

So if you needed substance abuse care, sorry. If you were autistic, too bad. They’d be doing only that, carrying 50-60 folks at high risk. And with our current clinical models, the standard recommended caseload is two people. So we aren’t set up to do what we have been doing at scale. We do have effective treatments. They do work; I practice those. They do not work at scale. And so the way we have been doing things was great for the 1800s, if you only needed to treat six people, but it’s not going to work for 8-9 million people, or the 800,000 people in the world who are going to die this year.
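The capacity argument above can be restated as back-of-envelope arithmetic. The figures below just restate the speaker’s rough estimates (the "8-9 million" high-risk population and the 50-60 stretched caseload), not independent statistics.

```python
# Back-of-envelope restatement of the capacity argument above.
high_risk_people = 8_500_000     # mid-range of the "8-9 million" estimate
stretched_caseload = 55          # "50-60 people on their caseload"
recommended_caseload = 2         # standard recommended high-risk caseload

providers_stretched = high_risk_people / stretched_caseload
providers_recommended = high_risk_people / recommended_caseload
print(f"{providers_stretched:,.0f}")    # ~154,545 full-time providers
print(f"{providers_recommended:,.0f}")  # 4,250,000 at the recommended caseload
```

Either way the math goes, the required workforce dwarfs what is realistically available for suicide care alone, which is the "not at scale" point.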

So we’re going to have to do something else, which means your job is [00:53:00] going to be different, and it may change. And if you’re later in your career, maybe… I remember we were watching Marsha Linehan present at a conference, and there was a guy telling me that he would retire before social media would affect him in his job. And I remember tweeting about him and mentioning his name. Like, people being assholes. Pardon my French; bleep this out if this is a G-rated podcast. But being like that, like, my desire to not change is more important than the need of the people who are sick.

It is just so off-mission and so beyond me. And that level of resistance is so not acceptable to me in our profession. So I’m just going to be that antagonist. I’m not going to end up on your bad list, hopefully. Hopefully, you’ll think that’s a force for good. But I think our jobs are going to change. I think they’re going to have to, and if you are really married to the way you do your job and not [00:54:00] to the outcome of your job, you may be very unhappy.

Dr. Tony: And it’s likely to become more obvious what your outcomes are. I will add that I think everyone would agree that the golden age of psychoanalysis in the United States was the 1960s, and then it really didn’t go anywhere. That all kind of went away for various reasons, mostly related to insurance, but then some other things related to the outcomes.

It’s a boutique therapy. It works for some people; it’s nothing against psychoanalysis. However, that age isn’t coming back, it does not appear, any time in the future, and this technology is not going to bring it back. So we’re in the same boat here: we’re going to be more and more focused on outcomes, and better outcomes, faster than we ever were in the past. The brilliant part of that is that right now you’re held accountable for outcomes [00:55:00] and you don’t really have the tools to produce them. That will change.

Dr. Sharp: There’s so much to sort through here, so much to think about, but the point is we’re moving forward, not backward. Technology, and I use that term very broadly, is where we’re going. That applies specifically to assessment, right? Neuropsychological testing and personality assessments and so forth. That’s what we’re doing. I mean, the tests now are being developed and normed and standardized on digital means of administration. And that’s it. We’re headed in that direction.

Dr. April: And I don’t mean to speak to your audience like they are the cranky grandpa. There are a whole lot of people carrying the water for innovation and progress. I think there are a whole lot of people out there who recognize the need and just needed somebody to put some words on the frustration of being held accountable for something that no one has invested in the technology to support. And I want you to hear that that level [00:56:00] of frustration and anxiety is pretty reasonable, and that there’s a way forward to solving the problem.

Dr. Sharp: Right. I think you were giving people a lot to think about here, which is good.

Dr. April: If you take me drinking, I’ll just say three things.

Dr. Tony: If they don’t scream us off the podcast, we’re happy to come back.

Dr. Sharp: I love it. Well, could we talk a little bit more, if there’s anything else to add, about the real-world application here, for the normal clinician going through their practice day to day? What could we do tomorrow if we wanted to, or what could we get involved in? How does this come into play?

Dr. April: Technology things for tomorrow: for clients who have survived a suicide death, consider having them donate data. For clients who would like to see an advance in science, let them know that they can advocate and donate data. These are things they can do. Let them know that. Let them see. I think people are [00:57:00] curious. I think of my cousin, who eventually died of cancer, and she was incredibly interested in all the latest treatments coming out and what’s happening. So let people know what’s coming, because I think that gives people some hope that feeling suicidal could have a way of being tracked, measured, and responded to more reliably. And wouldn’t that be great? Letting folks advocate for research that would benefit them. I think those are things we can do.

There are apps that I think are great, and I won’t recommend specific ones. People often say, give me the names of three apps that you would recommend. I think they just took Mood 24/7 offline after a decade, but apps that will track your mood on a scale of 1-10, let you enter a journal entry and talk about what’s happening, and share with your clinician and graph it, those are great. And those are surprisingly good for folks that diary cards don’t work for. Doing that, you go from an [00:58:00] adherence of about 11% to, if your therapist always looks at it, something like 91% charting mood and diary generation.

My clients at high risk for suicide were incredibly responsive to several different applications that are DBT diary cards. They were more adherent in keeping their DBT diary cards using applications, and those included suicide measures. I think that’s super cool. There are free ones and ones that you pay for. In probably my last two years of seeing clients full time, I had most of the members of my group using applications; by their choice, they could use the paper diary card or pick an application, and I had almost everybody using applications, not because I told them to but because they work really well, and their adherence across all their sessions was really high. So have folks start to do that. Other thoughts?

Dr. Tony: Do what you can to [00:59:00] collect data from your patients in a reasonable and ethical manner. Have a policy ready for managing that data. You’ve got a lot of choices to make as a clinician. Sometimes you work at a clinic that dictates all that to you, but if you’re an individual clinician, you’ve got a lot of leeway as to how you’d like to provide care. Take a look at the current app infrastructure. There are a lot of mood and diary apps, and there may be ones that you prefer as a clinician, and ones that your patients prefer. You can be pretty flexible; you get the same output pretty much no matter which way you slice it. That’s stuff you can do tomorrow.

Dr. April: And if you work in healthcare systems and they start to ask which clinics would like to participate in something like this, volunteer. Qntfy works with healthcare systems where they’re collecting information to reduce suicide deaths or to improve mental health care some other way.

[01:00:00] Virna Little, who is also a very well-respected suicidologist, used data in the mental health record to reduce overall suicide deaths across 75 or so federally qualified health centers in the Manhattan area, largely because they were able to identify, within their clinical data sets, folks who were more likely to die. And they weren’t who people thought: it was usually folks who had a blood sugar regulation disorder, like diabetes, and a bipolar diagnosis, and they got different primary care.

So, if there’s an opportunity to do that and you belong to one of those systems, or you as a clinician might want to hook up with folks who do, just get involved. There are so many little steps you can take, and the more we chip away at it, there will be a tipping point. I think this will change.

Dr. Sharp: That’s great. So before we wrap up, two things, I would love to close just [01:01:00] with resources and things that people can look at. But before we do that, I want to come back to what you said before we started recording, which is this conference about uploading intelligence or consciousness?

Dr. April: Oh my gosh.

Dr. Sharp: What’s going on there?

Dr. April: I don’t know how you edit your podcast, but this is where you would go, whoo! Like spooky music. This is real stuff, real science that’s happening. Tony would tell you, when I started this, I was a very technology-averse person. I was the last person to get a cell phone, to text, or whatever. And so I’ve used my coping skills in the service of helping patients, because I came to believe things had to change, and I started with myself. I was maybe the worst client you’ve ever worked with: awful, crying if you changed my email. I wasn’t good at coping. So now I go to AI conferences because I want to learn what the trends are in artificial intelligence, [01:02:00] machine learning, and data science, so that I can start to think about how they apply to my field, because we just weren’t going. Those things are free or cheap to go to, like a few hundred dollars. I can afford that.

And it’s a way for us to know what’s coming in the big world. There were speakers from Pfizer, from Optum, from major healthcare systems and big pharmacies, and they’re looking at that. So if we don’t want to be the poor cousins in mental health, we can just go to the same technology events that everyone else does. So I went virtually, because it’s a pandemic (for posterity, if you’re listening to this 10 years from now, and I hope you are), and they were showing a project that someone in the tech industry founded. So there’s a foundation, and then there’s a nonprofit that supports it, where they encourage people to start to upload their consciousness.

And I’m not saying we installed a little outlet in your head and plugged you into the machine. What they did was have people create a ton of data [01:03:00] about themselves, and then they could put it into a little, uncanny-valley robotic AI head. And you could talk to it, and it would respond with the personality of the person who uploaded it. And they’ve gotten pretty good at it. So they did some demonstrations of it. I think they were talking to a woman who was a social worker, and it was very fun.

And it occurred to me: one of the really hard things in suicidology is doing ethical research on people who died by suicide, because once somebody has died, it’s a little too late to do experimental research where you try things and try to get different outcomes, right? Because one outcome is death, and there are some real ethics there. And so I’m like, oh, this would be about the best way I know: if you had a ton of artificial intelligences of people who died by suicide and folks who didn’t, you could try different therapies or experiments to see what worked and what didn’t.

So I’ve talked with some folks, and that’s actually possible. Now, is that going to happen today? No. I would have to [01:04:00] form a lot of relationships and get a lot of willing people to do it, and some of those might die by suicide. But there really could come a day, and that’s what this foundation is working towards, where people put up so much of their digital data about themselves that we have a good idea of how their personality would respond. And those data sets could be used for doing research on suicide without actually impacting a real human being’s life. And maybe then we could do some more experimental things in our field. And I think that’s cool. Also creepy.

Dr. Sharp: Okay. It can be both, right? We can live in that gray area. Cool and creepy.

Dr. Tony: And imagine being able to merge data from high-resolution MRI and blood work and a full medical record and digital life data, and anything else that you could think of to be able to build better models and better techniques and better tools to fight this problem.

Dr. Sharp: [01:05:00] Right. It’s so exciting. I know that it’s also terrifying, this whole big data thing: what are people doing with it? It is terrifying. And talking with you all, it’s nice to hear that there are folks out there who are trying to do it consciously, ethically, and mindfully, to truly help people. That’s pretty amazing.

Dr. April: Well, thank you. I think that there are a lot of people in our profession who really do care about changing the world. And this is a transformative period in history, so there’s never been a better time in history to care about it. And during periods of change, you can pick where you want to be in the change. I hope a lot of your audience will join us, because this can help the problem of suicide, and it can do so much for mental health beyond that, I think.

Dr. Sharp: Of course. Well, if people want [01:06:00] to dig into this, want to learn more, want to learn more about you, or want to learn more about the tech... I mean, I’ve been taking copious notes; we’ll have pretty lengthy show notes. But are there any big resources, websites, places people can go?

Dr. April: What do you think about the Qntfy peer-reviewed research? There are some really good papers on Qntfy.

Dr. Tony: Qntfy research. You’ll see our published research from our team; there’s a bunch of papers up there. I think some of those will be quite exciting for your listeners.

Dr. April: And if you want to go to the journal Suicide and Life-Threatening Behavior: Tony is the board chair of the American Association of Suicidology, I’m on the executive committee, and Thomas Joiner is the editor of that journal. Dr. Philip Resnik was the first author, and I was the second author, of an article published about this topic, I think some fundamentals, just last month, so October 2020. So if you look up Resnik et al. in Suicide and Life-Threatening Behavior, I think you can read it. It’s a pretty good [01:07:00] article targeted at our field.

Dr. Sharp: Awesome. Thank you all so much. I’m guessing that there will be some discussion around this. I hope that people respond and share some thoughts, but maybe this is just round one. I would love to talk with y’all again if the opportunity presents itself. But for now, thank you so much. It was great to chat with you.

Dr. April: Thank you for having us. Folks, if you’re listening and you had a good time, Jeremy is incredibly conscientious and fabulous to work with. So hit like, leave good reviews, give stars on whatever platform you’re on to support this, because I think this is a great way for cool information about our field to get out there in a timely way.

Dr. Sharp: Okay, y’all, thank you so much for listening to this episode. If you have not checked out the show notes, I would definitely do that. There are a ton of links there with all of the information, people, and resources that April and Tony mentioned. And like they said, [01:08:00] I definitely want to call your attention to the page on the Qntfy website that has all of the relevant research that we talked through today. There’s a lot to sort through there and it’s so compelling. So I hope that, if nothing else, you took away some hope that we have tools out there that are getting better and better by the day to help folks who really need it.

Now, if you’re an advanced practice owner and you would like to get some support of your own in building your practice and taking it to that next level, I’d love to help you with that. My Advanced Practice Mastermind Group is a group coaching experience where about 5 or 6 psychologists get together and keep each other accountable to set and reach goals in their practices around hiring, streamlining, making things more efficient, and increasing your income, of course, and your [01:09:00] impact with your clients. If that sounds interesting, I think we do have two spots left, at least at this point. You can find out more at thetestingpsychologist.com/advanced.

Okay, I will catch you all next time with another business episode coming up this Thursday. Take care in the meantime. Bye, y’all.

The information contained in this podcast and on The Testing Psychologist website is intended for informational and educational purposes only. Nothing in this podcast or on the website is intended to be a substitute for professional psychological, psychiatric, or medical advice, diagnosis, or treatment. [01:10:00] Please note that no doctor-patient relationship is formed here. And similarly, no supervisory or consultative relationship is formed between the host or guests of this podcast and listeners of this podcast. If you need the qualified advice of any mental health practitioner or medical provider, please seek one in your area. Similarly, if you need supervision on clinical matters, please find a supervisor with expertise that fits your needs.
