PhD/PsyD PHQ-9 ?


quickpsych · Clinical Psychologist
I've noticed, and maybe this isn't anything too new, a growing push to use the PHQ-9 as a standardized screener and a quantitative way of measuring patient improvement/outcomes, or even to diagnose depression outright.

But what concerns me is seeing it used in facilities and settings that serve populations where a number of the PHQ-9 symptom items can easily be symptoms of the population's common medical conditions. For example, why is the PHQ-9 used in facilities where individuals go inpatient after sustaining an injury? Or in facilities where the population is much older? Items asking about sleep, fatigue, appetite, movement... these are all things commonly affected by an injury, aging, or both.

Am I missing something about the PHQ-9 - perhaps some ability to screen out false positives, where someone falls in the clinical range for depression due to comorbid physical symptoms?

Sure, I would expect most well-trained psychologists to be able to differentiate and make a clinical judgement call on PHQ-9 scores viewed alongside patient presentation and recent history. But what about the facilities, bean counters, and others who look at the scores and go, "well, they're depressed, says right here on the scores!"
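
To make the concern concrete, here's a rough sketch of how mechanically the numbers get read. The severity bands are the published ones from Kroenke, Spitzer, & Williams (2001); the "somatic subscore" is NOT part of the instrument - it's an ad hoc flag I made up to illustrate how heavily the sleep/fatigue/appetite/movement items can drive a "clinical range" score:

```python
# Standard PHQ-9 scoring, plus an ad hoc "somatic share" to illustrate
# how much of a score the overlap items can account for. Severity bands
# follow Kroenke, Spitzer, & Williams (2001); the somatic flag is invented.

SEVERITY_BANDS = [(0, 4, "minimal"), (5, 9, "mild"), (10, 14, "moderate"),
                  (15, 19, "moderately severe"), (20, 27, "severe")]
SOMATIC_ITEMS = [2, 3, 4, 7]  # 0-indexed: sleep, fatigue, appetite, psychomotor

def score_phq9(responses):
    """responses: the nine 0-3 item ratings, in instrument order."""
    assert len(responses) == 9 and all(0 <= r <= 3 for r in responses)
    total = sum(responses)
    severity = next(label for lo, hi, label in SEVERITY_BANDS if lo <= total <= hi)
    somatic = sum(responses[i] for i in SOMATIC_ITEMS)
    return {"total": total, "severity": severity,
            "somatic_share": somatic / total if total else 0.0}

# A post-injury inpatient endorsing mostly somatic items still crosses the
# common >=10 screening cutoff:
print(score_phq9([0, 1, 3, 3, 2, 0, 1, 2, 0]))
# -> total 12, severity 'moderate', somatic_share ~0.83
```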

 
  • Like
Reactions: 4 users
For psychiatrists, this is taught to us, and we often use the geriatric depression scale for older patients/patients with multiple medical comorbidities. Especially in the inpatient setting.

I do like the PHQ-9 not to diagnose depression, but as a quick little tool for patients to have a way to sort of monitor their symptoms. It helps because often patients don't know how they should feel on/off medication. In general I diagnose based on the clinical interview, and any scale I may use is just to offer some more info, especially for people who are poor historians.
 
  • Like
Reactions: 1 user
I've noticed, and maybe this isn't anything too new, a growing push to use the PHQ-9 as a standardized screener and a quantitative way of measuring patient improvement/outcomes, or even to diagnose depression outright. […]

It's a decent screener, but it should not be used to track outcomes. It is a terrible tool for that. Unfortunately, the people making these decisions know very little about psychometrics.

As for overlap with other things, that's because people are not using it as a screener. A positive hit on a screener should be followed up with a more comprehensive assessment of some sort to tease that apart. But, you won't see that too much, for various reasons.
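
To put rough numbers on why a positive screen needs follow-up - using the commonly cited ~0.88 sensitivity/specificity for the >=10 cutoff (Kroenke et al., 2001), and prevalence figures I'm making up for illustration:

```python
# Bayes' rule illustration of screener positive predictive value (PPV).
# Sensitivity/specificity ~0.88 are the commonly cited figures for the
# >=10 cutoff; the prevalence values below are assumptions.

def ppv(sens, spec, prev):
    """P(depressed | positive screen)."""
    true_pos = sens * prev
    false_pos = (1 - spec) * (1 - prev)
    return true_pos / (true_pos + false_pos)

print(f"PPV at 10% prevalence: {ppv(0.88, 0.88, 0.10):.0%}")  # ~45%
print(f"PPV at 30% prevalence: {ppv(0.88, 0.88, 0.30):.0%}")  # ~76%
# In a low-prevalence clinic, roughly half of positive screens are false
# positives even with decent operating characteristics - and medical
# confounds that inflate somatic items only make specificity worse.
```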
 
  • Like
Reactions: 9 users
Heh, forget the PHQ-9, this is a very general issue with assessment/diagnosis in the field, and a large part of why (for better or worse) initiatives like RDoC and HiTOP came about. Few of our symptoms are specific to the disorders in question, and we lack the ability to dig much deeper at the moment. Picture a blood draw showing elevated white blood cells. Infection? Leukemia? Just stress? It's tough.

The general trend has been going on much longer than this, and ultimately I think relying more on quantitative measures is a good thing. The PHQ-9 may not be right for all populations, but what alternatives exist that are good for those populations? The GDS is an example, but it sure isn't perfect, and it can be inappropriate for older patients in good health who have somatic symptoms. For now, I think it's just important that we rely on people to make good decisions (and collectively, as a field, healthcare needs to get better at standing up to the bean counters).
 
  • Like
  • Love
Reactions: 2 users
For folks interested, there's also the BDI-PC. The downside is that I think it's still technically copyrighted, since it's derived from the BDI-II.

But I agree, the problem isn't necessarily the measure itself. It's the inappropriate use of the measure.
 
  • Like
Reactions: 2 users
Believe it or not, the PHQ-9 has better psychometrics/validity and is quicker for nursing staff to administer than the GDS (at least as far as skilled nursing populations are concerned).

I actually have trouble believing this, but Debra Saliba over at RAND has apparently done the work.

When I have my students do the initial screening evals with residents, I do have them give a GDS, because I feel it complements the PHQ-9 screener the nurses now administer... I personally feel the GDS is better at capturing the more "existential" symptoms of depression. But it's hard to argue with quality data:

Saliba, D., DiFilippo, S., Edelen, M. O., Kroenke, K., Buchanan, J., & Streim, J. (2012). Testing the PHQ-9 interview and observational versions (PHQ-9 OV) for MDS 3.0. Journal of the American Medical Directors Association, 13(7), 618-625.

Saliba, D., Buchanan, J., Caudry, D. J., Denver, C., Dupuis, R., Edelen, M. O., ... & Lead, A. (2008). MDS 3.0.

also (although I haven't seen this - but the RAND people cite it):

Ruckdeschel K, Katz J, Sullivan F, et al. Redesigning the minimum data set: Depression. 2006 Annual Meeting, American Association for Geriatric Psychiatry; March 2006; Puerto Rico.
 
  • Like
Reactions: 3 users
I love how short it is - in a world of rating scales with hundreds of questions, the brief, value-added assessment is a thing of pure ART.

Can you imagine getting great psychometrics from just NINE questions? Pure beauty.

But maybe that's just the measurement nerd in me.
 
  • Like
Reactions: 4 users
For psychiatrists, this is taught to us, and we often use the geriatric depression scale for older patients/patients with multiple medical comorbidities. […]
Thanks, I appreciate this. I work more and more with nurses and physicians and the PHQ-9 is more and more prevalent. It's good to hear that it's not always being used as a diagnostic tool in these types of settings.
 
For folks interested, there's also the BDI-PC. The downside is that I think it's still technically copyrighted, since it's derived from the BDI-II. […]
I've often pushed for the BDI-PC, to no avail, in settings where the decision rests with the powers that be, Medicare, funders, or some combination of the three.
 
Have not read all of the above replies, but in larger organizations (e.g., the AMC hospital system in which I work, with tons of specialty and primary/urgent care clinics all across the state), I chalk it up to one of the things that comes with working in such a large and diverse system with so many factors to consider.

We are required to complete it at minimum every however-many months - any provider within certain parameters in that system who sees the person will see the flag in the medical record saying it is due. If there are elevations, we document why they're better explained otherwise, summarize any further conversation/recs around endorsed items warranting inquiry, and look closer if the graph shows a concerning upward trend over time.

If the provider still has concerns, there is an additional protocol (basically a risk assessment with some decision-tree aspects) and links to other resources/consults, etc. I view it largely as liability coverage, given the wide range of provider types (and the variability in training, experience, and general quality that goes along with that), and as a way to ensure that a wider swath of the population is at least being asked about such things at all. A reasonable cost/benefit ratio in a big system, IMO.
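
For what it's worth, the flag/escalation logic boils down to something like the sketch below - the interval, cutoff, and trend rule are invented for illustration, not our system's actual parameters:

```python
# Hypothetical sketch of the screening flag / escalation logic described
# above. SCREEN_INTERVAL, ELEVATION_CUTOFF, and the trend rule are all
# made-up illustrative parameters.
from datetime import date, timedelta

SCREEN_INTERVAL = timedelta(days=180)  # "every however-many months"
ELEVATION_CUTOFF = 10                  # common PHQ-9 screening threshold

def chart_actions(last_screen, history):
    """history: list of (date, PHQ-9 total) pairs, oldest first."""
    actions = []
    if date.today() - last_screen >= SCREEN_INTERVAL:
        actions.append("flag: screening due")
    if history and history[-1][1] >= ELEVATION_CUTOFF:
        actions.append("document: is the elevation better explained otherwise?")
    last_three = [score for _, score in history[-3:]]
    if len(last_three) == 3 and last_three[0] < last_three[1] < last_three[2]:
        actions.append("review: upward trend -> risk protocol / consult")
    return actions
```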

Are there better ways to go about screening? Of course - there are a lot of cons to a blanket approach, plus change in large systems is excruciatingly slow. But consider that some of the counties in our catchment area literally do not have a single practicing psychologist (much less a psychiatrist, ha!), that mental health care is largely limited to the CMHC or short-term options (unless folks are savvy enough for telehealth with providers elsewhere in the state, I suppose), and that for a screener to actually be used consistently across widely varying settings and providers it must be extremely uncomplicated (and presented as expected rather than optional) - then it starts to sound less bad. I mean, eh, I'm glad at least someone is asking my grandpa about his mood. Not that he would ever give a straight answer to anyone, certainly not anyone he has to see out in the community or at family get-togethers regularly, but the occasional fruitful conversation that otherwise wouldn't have happened must occur occasionally, right?

Can't say I've ever seen many old white rural Republican men in my area of the South keen to talk about FEELINGGS, despite laudable efforts - and that's the image I think of when considering the pros/cons and pragmatic factors of using blanket screeners as regular practice across settings. I worked with one such man during internship. He will probably always remain one of my favorites, and to me he illustrates that in large systems a broad approach, however blunt and imperfect, is better than no approach to assessing mental health. This was a different state's health system and I don't recall what screener was used, but that's what initially got him in the door - a screener or some such with his PCP. He started driving 2 hours each way to a sleep clinic where I was training (another state with a lot of rural areas), was then referred to a geriatric psychiatrist, and I don't know what else, but last I heard he had undergone ECT and was living his best life in decades - since the '70s - riding a motorcycle around with other vets, with a pet cat in tow.

On a related note, there is a small group in our psychiatry department looking to improve and change screening within the broader system, which is exciting. Also - change is slooooow in big, sprawling systems. But still exciting.
 
Not a big fan of these increasingly mandated screening devices, which are more likely to increase prescriptions of SSRIs for mild to moderate cases. I guess if I had a vested interest in a pharmaceutical company, maybe I would be more on board. To me these screeners are a symptom of a major systemic problem more than they are a solution to anything. They are also a way for people/administrators/politicians/bureaucrats to feel like, or say, they are doing something to treat mental health while our population continues to get sicker.
 
  • Like
Reactions: 1 user
Not a big fan of these increasingly mandated screening devices, which are more likely to increase prescriptions of SSRIs for mild to moderate cases. […]
When I was in training we relied on verbal ratings from the client at the beginning of session (called 'mood checks' in CBT), where they would simply rate their depression and/or anxiety on a 1-10 scale. Does anyone know of research on the marginal practical utility of the PHQ-9 (to track 'outcomes') vs. simply using 1-10 mood-check ratings over the course of therapy? I'm thinking in terms of the whole cost-benefit equation. 'Mood' (depression/anxiety) is an inherently subjective phenomenon. It strikes me as odd that - in the context of everyday psychotherapy - a 1-10 rating from clients over the course of therapy would be poo-pooed as an 'inadequate,' 'unscientific,' or 'unreliable' means of conveniently tracking symptoms across treatment.

It's also interesting that I participated in NIMH-funded treatment outcome studies on depression and, back in the day, even in such an expensive RCT-type study, we only did 'formal assessment' (BDI, Hamilton-D) of depression as an outcome measure at a few time points (pre-treatment, first session, mid-point, end, and 3- and 6-month follow-up). This was an actual RCT that was later published. Nowadays, we're expected to do an 'objective [ahem, cough]' measurement with a PHQ-9 or some other measure every single session (or every other session). It strikes me as ridiculous, especially considering that in many settings/populations there is a tremendous 'response bias' effect (e.g., motivation to appear 'sicker' or 'less sick' depending on symptoms) that probably accounts for more variance than actual variation in the phenomenon we want to examine (changes in mood state).

The recent trends (over)emphasizing quantification/measurement of subjective mood states in routine clinical practice seem to be accelerating. I mean, I'm all for valid measurement as part of taking a scientific approach to psychological practice, but often this kind of thing is just 'mandated' in a top-down, authoritarian fashion in order to make it 'look like' something is *being done* [TM] by *the people in charge* to effect *continuous improvement* [also TM].
 
  • Like
Reactions: 1 user
but often this kind of thing is just 'mandated' in a top-down, authoritarian fashion in order to make it 'look like' something is *being done* [TM] […]

"It's evidence-based." - every dude up top who doesn't really know what that means. Hell...the more I think about it, i don't think i really know what it means (at least in terms of the context these people use it, other than PR).
 
  • Like
  • Love
Reactions: 3 users
"It's evidence-based." - every dude up top who doesn't really know what that means. Hell...the more I think about it, i don't think i really know what it means (at least in terms of the context these people use it, other than PR).
According to the APA, 'evidence-based' psychotherapy is pretty straightforward as a concept: it is the combination of, basically, (a) the scientific literature, (b) clinician expertise/competence, and (c) patient values/preferences. Where those three intersect is 'evidence-based psychotherapy.'

I also read a pretty good article on 'evidence-based assessment' a couple of years ago; I remember it being sophisticated/nuanced, and I'm pretty sure it didn't say that self-report symptom checklists with a Likert scale for every item are the 'holy grail' of evidence-based assessment. Heck, I even recall it emphasizing that 'evidence-based assessment' is a process rather than a specific instrument or type of instrument - a hypothetico-deductive process of formulating specific hypotheses, testing those hypotheses (in good faith), listening to the answer, and revising your working model (or individual case formulation).

I think the faith in (and enthusiasm for) the universal, routine (daily? hourly?) use of symptom self-report checklists as some sort of direct, infallible measure of the phenomenon of interest is a sad fad that has developed over the course of my career. Assessment used to be understood in a broader sense (and by competent clinicians in routine practice, it still is). And the further anyone gets from routine contact with clients, the more their 'faith' (see Jacob Cohen's concept of methodolatry - the worship of method) in symptom self-report checklists grows.
 
  • Like
Reactions: 2 users
I see more value in requiring PHQ-9s every 20 minutes than I do in requiring even one of the 10+ page treatment planning/therapy goals forms required by several practicums I did and by my internship. I hear the VA has a particularly fun version of these that I was fortunate to avoid.

On a more serious note, I do think this is always a bit of a tug of war, and I see merit on both sides. From a public health perspective, I'm all for increasing assessment. I posted a (relatively) comprehensive battery I use in my research studies a couple years back that was ~100 questions (incl. the PHQ-9), and I'm hoping to one day get it integrated into routine primary care practice to reduce the number of people slipping through the cracks. It would need to be a rule... because otherwise it wouldn't happen.

The problem arises when the people who actually know a thing or two about this stuff aren't given the flexibility to make decisions about things they know better than the bean counters, and when the mandates come down for reasons other than what is best for patients.
 
  • Like
Reactions: 3 users
When I was in training we relied on verbal ratings from the client at the beginning of session (called 'mood checks' in CBT), where they would simply rate their depression and/or anxiety on a 1-10 scale. […]

Isn't one of the basic lessons of psychophysics over the last century and a half that, while absolute ratings vary a lot between people, magnitude estimation of direct experience is something people are actually reasonably good at? Like, maybe I rate a light source's intensity as 50 and you rate it as 5000, but if we increase the luminance by 50%, both of us are going to give higher ratings in roughly the same proportion. Seems odd to discard that because it doesn't have an acronym.
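
A toy demonstration, assuming Stevens' power law (psi = k * S^a) with the classic brightness exponent of about 0.33; the per-observer scale constants are arbitrary:

```python
# Two observers with wildly different rating scales (different k in
# Stevens' power law, psi = k * S**a) report the same *ratio* of change
# when the stimulus increases by 50%. The 0.33 exponent is the classic
# brightness value; the ks are arbitrary.

def rating(stimulus, k, a=0.33):
    return k * stimulus ** a

for k in (8.7, 870.0):          # roughly "rates it 50" vs "rates it 5000"
    before = rating(200.0, k)   # baseline luminance (arbitrary units)
    after = rating(300.0, k)    # +50% luminance
    print(f"k={k}: {before:.0f} -> {after:.0f}, ratio {after / before:.3f}")
# Both ratios are identical (1.5**0.33 ~= 1.143), even though the absolute
# numbers differ by a factor of 100.
```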
 
  • Like
Reactions: 3 users
Heh, forget the PHQ-9, this is a very general issue with assessment/diagnosis in the field, and a large part of why (for better or worse) initiatives like RDoC and HiTOP came about. […]
Love your comment/analysis.

I also think the entire field is on the cusp of re-conceptualizing 'evidence-based psychotherapy' as a MUCH broader playing field than most current systems take it to be. In the VA system, for whatever sociocultural reasons (and I can speculate deeply on these), the term 'evidence-based therapy' (or EBT) has become, in the minds of many, exclusively synonymous with the protocol-for-syndrome, manualized, alphabet-soup-worshiping, fixed-structure/length, pre-determined-agenda 'recipe' approach to psychotherapy. This is seriously myopic and seriously incorrect. The definition of evidence-based therapy in ANY published literature I have EVER encountered (including the APA definition) has NEVER limited the concept to manualized/alphabetized treatment protocols. Like... never. Steven Hayes and Stefan Hofmann are laying out a 'process-based' therapy future that is far more transdiagnostic/idiographic and more in line with the classical evidence-based functional assessment/analysis model from 'old fashioned' CBT case formulation approaches. However, it requires thinking clinicians and a lot more effort than the 'diagnose and plug-and-chug the protocol' approach. Standard disclaimer: I routinely prioritize evidence-based protocols (like CBT-I, PE, CPT, etc.) for clients who are ready for them, receptive to them, and good candidates. They definitely have their revered place in the palette of treatment options.

The organizational factors behind the exclusive reliance on a protocol-for-syndrome approach (and the attendant myopia) likely stem from the predominance of the 'medical model' [metaphor?], as well as the frankly, at times, sadomasochistic, authoritarian, top-down 'marching orders' command-and-control model of administration/supervision that has prevailed throughout the organization's entire history. The intra-personal/personal factors I will leave the reader to unravel according to his/her lived experiences within the system, lol.

Change is coming to the field and, eventually, to the VA regarding how we conceptualize and implement 'evidence-based therapy.' Unfortunately, I predict that this change will be exceedingly slow and excruciatingly painful (especially for providers), lagging at least 10-15 years behind innovations in grad programs and the field at large. It will happen long after I retire (20 years from now... maybe?).

When we have the Joint Commission (recently) declaring it 'illegal' for us to have a drop-down menu within our progress note templates to efficiently indicate the clinical observation of 'no suicidal ideation expressed during today's session' - they said it was not 'evidence-based' assessment, pointing to a preference for the C-SSRS (which is just 9 yes/no questions) - then we have reached a level of concrete stupidity that I never, even in my wildest and most cynical days, would have dreamed possible... even for the VA. Yes, you read that right: the Joint Commissars (I mean 'Commission [of atrocities]') have declared it somehow 'outside the scope of acceptable practice' for an individual doctoral-level provider to simply note in the chart that a specific client is not currently suicidal without first administering an 'evidence-based' assessment/script such as the C-SSRS in every session. It is obvious to any sophisticated/experienced clinician how robotic adherence to this mandate would have far more negative impacts than positive, especially on the therapeutic alliance:

[provider]: 'Are you having any thoughts of harming yourself today, or have you had any of these thoughts since our last session?'
[patient]: 'No. I have never had thoughts of killing myself... I couldn't do it, I love my kids too much and I wouldn't hurt them like that... besides, I have hope now that life is definitely worth living and I'm excited to get on with it.'
[provider]: 'Hmm... okay, I hear ya on that. But, just to be sure [and to satisfy the Joint Commissars], we're gonna need to go through some specific questions on that. Would that be okay?'
[patient]: 'Umm... okay, if you say so, but I think I've made it pretty clear that I'm not suicidal, haven't even thought about it, like... ever, and don't plan to do anything.'
[provider]: 'Okay. Thank you for your service/compliance with the Joint Commissars and Big Brother... let's begin...' [robotically administers the C-SSRS]

'Thank you, Joint Commissars...may I have another?'


This level of concrete stupidity and authoritarian boot-stomping-on-the-face-of-providers-forever is wholly incompatible with where the field is headed, and it will take a LOT of blood/sweat/tears and gnashing of teeth to turn around. It will require decades of patient, careful explication to the 'powers-that-be' of how not having to robotically read from a script doesn't mean we are 'abandoning' science, best practices, or 'evidence-based' approaches to therapy or assessment.
 
  • Like
Reactions: 1 user
Just recently went through a JCAHO accreditation. They also forced us to adopt a risk measure. We implemented the ASQ because it was the simplest to use. What's funny is that it says if there is a positive, then refer to a mental health provider. We took that to mean we would just have a staff member administer it. At least that way it isn't wasting my time, since I already do a risk assessment on every patient I meet. Sometimes that involves not even asking, when there are absolutely no indicators of risk. I don't ask every patient about psychosis either. It's called clinical judgement. If I miss something, I'm the one who has to live with it. We are the ones making life-or-death decisions every day. I am so glad to be out of the system as much as possible.

I think there is something really unhealthy about the whole approach of trying to make sure people don't do a bad job, as opposed to trusting that clinicians are competent and trying to help their patients. It's not like any of these systems are actually going to stop the crappy colleagues that we all know and love. In fact, these approaches tend to elevate the incompetent, because they don't have to really help the client so much as check the right boxes.

It is also devaluing, disempowering, and dehumanizing to patients. Part of it is trusting that the patient will tell me about suicide - and if they don't, maybe that is on them. Also, when they don't tell, it's usually because of the robotic implementation of suicide protocols anyway. Patients who have been ground up a bit by the system aren't going to trust us. I explicitly tell some of my clients that it is up to them to tell me about their SI, and that they need to trust that I won't overreact - 95% of the time I won't hospitalize unless we both agree that it makes sense. I am not lying, either. That has been exactly my experience when I talk with a client one-on-one about the best plan and they realize that I don't give a crap what anyone outside the room says or does. That hour in the room is not just idle chit-chat; it's about developing a working alliance, aka a trusting relationship between two human beings. That is what saves lives, not some friggen piece of paper approved and promoted by bureaucratic-mindset type people.
 
  • Love
  • Like
Reactions: 1 user
Just recently went through a JCAHO accreditation. They also forced us to adopt a risk measure. We implemented the ASQ because it was the simplest to use. […]
That's what has always really burned me up about this approach.

Ostensibly, these mandated procedures are pushed forward upon the rationale that they are 'ensuring competent practice' or some other such nonsense.

As I've always said, if the actual problem is incompetent doctoral-level mental health providers, then you've got a problem on your hands that no 'checklist,' algorithm, or manualized/operationalized written approach is going to meaningfully address...let alone 'solve.'

It's all for show. It's 'easy.' It's publicly conspicuous. It's 'laudable.' It's (cringe) 'evidence-based' - because we say so, as if the term has become some sort of magical *incantation* that can be invoked (usually in a smug, self-assured tone) to immediately and completely halt any inquiry into its relevance, reliability, or wisdom.

"We'll write a policy that mandates X. See...problem solved. Now we have competent clinical practice. It's guaranteed (until the next committee visit and their new recommendations)."

Actually spending time getting to know your clinicians - their practices, their strengths/weaknesses, what they are actually doing with their caseloads, their theoretical orientations - or conducting MEANINGFUL peer review takes time/money and actual expertise, and it's... I mean it's... it's too hard. And people might actually talk back and debate things, and even successfully use logic/evidence to call into question the wisdom of those in charge. Can't have that.

Nah...we'll just write a policy/procedure and make sure to include enough overkill in it so that it cannot be questioned.
 
  • Like
Reactions: 1 user
How evidence-based is universal suicide risk screening anyway if our accuracy at predicting suicide is at about chance levels? 😏
 
  • Like
Reactions: 1 user
The organizational factors behind the exclusive reliance on a protocol-for-syndrome approach (and the attendant myopia) likely stem from the predominance of the 'medical model' [metaphor?] […]

...and yet... fun thought experiment: when was the last time you saw an even semi-complicated psychiatric patient who was not on a unique mishmash of medications that had never been tested together in a trial? For that matter, pick any difficult-to-control non-psychiatric chronic health condition and ask the same question.

Not arguing the merits of doing it or trying to indict physicians. Just pointing out that I'm hesitant to even call it the medical model, because frankly... medicine doesn't seem beholden to it. I like the process-therapy movement. It's funny that we think of it as new - I'd describe Aaron Beck's early work as extremely process-focused, in many ways more so than what I see from the current process-focused crowd. I also think there is absolute merit to trying someone on a standard course of BA for depression before mucking around with anything else, and to doing our best to motivate PTSD patients into PE/CPT rather than throwing up our hands when they express concern at intake and saying, "They won't tolerate it, guess we better do long-term supportive therapy or psychoanalysis instead." Once that has happened and failed (which, sadly, is still quite rare in this field), I think that's where real expertise is needed - and where we need better science to guide us.
 
  • Like
Reactions: 2 users
This part rings so true at VA. It's almost as if readiness to change lies on a continuum and as if veterans who show up to intake are distributed along that continuum. Maybe even approximating a normal distribution (with most people somewhere in the middle on motivation or readiness to change with outliers at either high or low extreme). The VA system, particularly in specialty clinics, appears geared toward pretending that only the EXTREME ends of the distribution (i.e., those at extremely LOW and those at extremely HIGH readiness-to-change) exist and the task of a single-session intake is to sort them into those two very disparate groups.
 
  • Like
Reactions: 1 user