Busy week, late to the party. I agree with @Med Ed that this reads like something written by a committee: lots of feel-good statements and goals that are hard to disagree with, yet probably impossible to implement in reality.
#3 - A single, comprehensive electronic professional development career planning resource for students will provide universally accessible, reliable, up-to-date, and trustworthy information and guidance. This is a great example. Sure, it sounds great to have a single (presumably web-based) career planning resource. But do we really think we can build a one-size-fits-all resource that works for everyone? How is that supposed to work? It's going to be way too general to be helpful, and it won't take individual performance and attributes into consideration -- because it can't. Sounds good, but this just isn't possible. Or it is, and it's called SDN.
#7 - UME and GME educators, along with representatives of the full educational continuum, should jointly define and implement a common framework and set of outcomes (competencies) to apply to learners across the continuum from UME to GME. This is another dubious goal. Trying to define competencies in med ed is a pipe dream. Competencies work well for well-defined, task-based assessments; they do not work well for vaguer, professional-skill-based assessments. For example, can we define the competency of taking a history? Sure, if we define it as a checklist of things to ask. But a good history isn't a checklist -- it's knowing when to dig deeper, when things are distractors, etc. These things are extremely difficult to assess on a competency scale -- and very difficult to do "at scale," i.e., with 100+ students rotating through. Sounds good, but not possible.
Perhaps a better way of explaining my thinking: competency scales are good for measuring things that are linear. They do not work, at all, for things that are non-linear. How well a grocery clerk can check out items can easily be measured on a competency scale. The quality of an artist can't. Physicians fall somewhere in between.
#8, #9, and others are similar -- we should have better tools, more faculty development, etc. Same deal. They sound good. Like world peace: definitely a laudable goal. Definitely worth working towards. Not terribly practical. Perhaps not solvable.
#21 - a database of program applicants and interviews is something I completely agree with. This is (relatively) easy to do. How much would it really help? Can't tell. Residency Explorer is a good first step, but I agree it is based only on matched candidates, and adding those invited to interview would probably help. I just don't think it would actually change any behavior.
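For what it's worth, the data itself is simple. Here's a minimal sketch of the kind of per-program record such a database would need -- every field name is my own invention for illustration, not anything Residency Explorer actually exposes:

```python
# Hypothetical record for a per-program, per-cycle applicant database.
# All field names are invented for illustration.
from dataclasses import dataclass

@dataclass
class ProgramCycleStats:
    program_id: str           # e.g., an ACGME program identifier
    cycle_year: int
    n_applicants: int         # applications received
    n_interview_invites: int  # the piece Residency Explorer lacks today
    n_matched: int            # matched candidates (what it reports now)

    def invite_rate(self) -> float:
        """Fraction of applicants invited to interview."""
        return self.n_interview_invites / self.n_applicants
```

Easy enough to build. Whether applicants would actually use the invite rate to apply more selectively is the open question.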
#22 - discrete fields in the MSPE. Sounds good, but really impossible unless we standardize medical school performance. If all schools were required to put their students into quartiles, then having that in a standardized format would be very helpful. But if schools can say that 80% of their students are Outstanding and the other 20% are Superior, how are you going to "standardize" that? And how does this lead to a more holistic review? It seems like it would lead to more automatic filtering. The only things that would be standardizable at all are clerkship grades, shelf exams, and overall quartile.
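A toy example of why the labels can't be compared across schools -- all numbers here are invented:

```python
# Toy illustration (all numbers invented): the same MSPE label implies a very
# different class percentile depending on how a school distributes its labels.

# Labels ordered best-to-worst, with the percent of the class receiving each.
school_a = [("Outstanding", 25), ("Excellent", 25), ("Very Good", 25), ("Good", 25)]
school_b = [("Outstanding", 80), ("Superior", 20)]

def min_percentile(labels, target):
    """Lowest class percentile a student with `target` could hold."""
    cum = 0
    for label, pct in labels:
        cum += pct
        if label == target:
            return 100 - cum  # worst case: the bottom of this label's band
    raise ValueError(target)

print(min_percentile(school_a, "Outstanding"))  # 75 -- guaranteed top quartile
print(min_percentile(school_b, "Outstanding"))  # 20 -- could be 20th percentile
```

Same discrete field, same word, but "Outstanding" guarantees top-quartile at one school and almost nothing at the other.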
#23 - meaningful filters. Sure. Like what? Filters that match program mission. Like what? My program's mission is to "train good doctors".
#24 - Report USMLE and COMLEX as a single percentile. I feel this is the worst suggestion of the bunch. The graph in Carmody's post of the most recent comparison of COMLEX to USMLE shows what every other comparison has shown -- in general, students score lower on the USMLE than on the COMLEX. There's lots of scatter, and there will be plenty of people (some of whom are certain to comment on this post) who did better on the USMLE. But there is a very clear overall correlation. Treating the two equally assumes that the two exams are measuring the same thing, and that the populations of students taking them are equivalent. Neither of those may be true. In fact, the graph shows one very big inconvenient truth -- the minimum pass for COMLEX is 400, and for USMLE it's 194. Draw those lines on that graph. How many people failed COMLEX but passed the USMLE? Two. How many people passed COMLEX but failed the USMLE? Quite a few. Treating the exams equally is ridiculous. Perhaps we shouldn't use these exams for decisions at all -- just pass/fail. That's a discussion I'm willing to have. But even then, it's not clear that a pass on each exam means the same thing. This one has Gimple's fingerprints all over it.
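To make the population-equivalence problem concrete, here's a toy simulation -- all distributions are invented, not real score data. If the two test-taking pools differ in ability, the identical student lands at very different percentiles on the two exams:

```python
# Toy simulation: same student, two exams, different test-taker pools.
# All numbers are invented; this is not real USMLE/COMLEX data.
from random import gauss, seed

seed(0)
usmle_pool = [gauss(60, 10) for _ in range(10_000)]   # hypothetically stronger pool
comlex_pool = [gauss(50, 10) for _ in range(10_000)]  # hypothetically weaker pool

def percentile_of(pool, ability):
    """Percent of the pool scoring below this ability level."""
    return 100 * sum(a < ability for a in pool) / len(pool)

student = 55.0  # one student, one fixed underlying ability
print(f"Percentile among USMLE takers:  {percentile_of(usmle_pool, student):.0f}")   # ~31
print(f"Percentile among COMLEX takers: {percentile_of(comlex_pool, student):.0f}")  # ~69
```

Reporting both exams as a single percentile treats that ~31 and that ~69 as the same number.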
#25 - totally agree, programs should not offer more interviews than slots. Getting a uniform offer date is going to be difficult.
#26 - I think programs could offer both in-person and virtual interview options and be fair about it.
#28 - Early match - I've already written my ideas about that here.
#35 - an ILP (individualized learning plan) for each intern. You're kidding, right? How am I supposed to know what you need if you haven't started working yet? The medical schools are going to tell me?
#40 - this is probably the best recommendation of them all. It's easy to implement and costs hardly anything. It will also affect only a small number of people, though, so its impact is muted.
Overall, lots of things that sound good, but seem impossible to implement. A few ideas that are practical and would have a small benefit.