
Annual Specialty Competitiveness Stats



As I have no experience on admissions committees, I'd like to think there are differences in the quality of reference letters, performance in clerkship rotations (which would be objective), perceived interest in the specialty (research, electives, what's indicated in the LOR, etc.), and, most importantly in my opinion, the quality of the interview (i.e., is this person a sociopath? does this person seem to 'get it'?).

 

What am I missing here? If there are more than enough good candidates, do you just rank them all and take who you get?


Though I wouldn't say clerkship rotation performance is entirely objective, yes, of course, there are differences in the overall strengths of candidates' applications. Don't forget the on-site elective (like an extended two-week long interview) - for some fields, this can be extremely important in helping the program decide how much they'd like to work with you.


 

For most applicants, all of these things are very good and therefore all the same. C'est la vie.


Residency matching in Canada is very much like throwing a dart after a few beers.

We should definitely keep reference letters, the interview, and the personal statement, but we really need a standardized exam to compare everyone's academic abilities.

 

In the current system, the only way a program can tell whether you're good clinically/academically is through reference letters. And if you didn't happen to get an elective at the program you want, with the right person as your staff, working the right number of days to be adequately assessed, you've got no chance.

 

I'm sure a lot of people don't match to a certain program in a certain city simply because they got stuck with the wrong elective, with the wrong staff, at the wrong time, because that was the only elective available when they applied.

 

We need a standardized exam. We can start with one like the USMLE and, over time, rigorously validate it to make sure it correlates with clinical performance during residency.
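For what it's worth, that validation step is a simple calculation once you have paired data for the same trainees. Here is a minimal Python sketch, using entirely made-up numbers, of checking whether exam scores track later residency evaluations:

```python
# Hypothetical illustration only: all scores and evaluation averages below are invented.
from statistics import correlation  # Pearson's r; available in Python 3.10+

exam_scores = [231, 245, 258, 220, 270, 249, 236, 262]      # standardized exam scores at graduation
resident_evals = [3.1, 3.4, 3.9, 2.8, 4.2, 3.6, 3.3, 4.0]   # later in-training evaluation averages

r = correlation(exam_scores, resident_evals)
print(f"Pearson r between exam score and residency evaluation: {r:.2f}")
# A real validation effort would repeat this across cohorts, sites, and outcome
# measures before trusting the exam as a selection tool.
```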

 

Heck, let's change the LMCC so it actually matters for residency applications, because currently it serves no purpose at all except to grab at our wallets.


 

"Missing" is too strong a word, as on paper it makes sense. In reality, though, all the reference letters look pretty much the same (people usually won't write you a letter if they think you suck), all the clerkship rotation evals are pretty much the same (and some schools let you pick your best X of Y evals to post, so anything bad is washed out - pretty much useless), and everyone interested in a field does pretty much the maximum number of electives in it (why wouldn't you?) and does some form of research (there is an element of luck with research in terms of whether you publish, etc., plus it's just an imperfect proxy for what we really want to know: how smart and hard-working are you?).

 

When you review these apps, basically the top 10% shine, the bottom 10% are obvious, and the middle is this great mass of people who all look pretty much the same. Since even in competitive specialties a large fraction of applicants match, you have to somehow rank the middle pack (if you just interview the top 10%, I guarantee you will have unfilled spots, which really sucks for a program - people don't often realize that :) ). The problem is how to rank that large middle group, and THAT is a really hard problem; it gets really subjective and vague... and good people don't match. This is probably even worse in fields where people aren't "gunning", as it were, because the entire point of gunning for a field is to shine and thus be accepted.



 

I completely agree that electives, and where and with whom you luck out in setting them up, matter greatly towards your chances of matching to your preferred sites. It's an unfortunate reality of how the system currently works.

 

However, I disagree that a USMLE-style standardized MCQ exam is automatically a better option. If we look at a surgical specialty like plastics, there are so many aspects to a career in that field that no MCQ knowledge regurgitation will ever assess (as is true in many other fields besides surgery, but I think surgery exemplifies this). Elective evaluations/letters correlate better with skills, work ethic, and team-player traits than a USMLE score does. And like I said before, practicing medicine isn't rocket science. You don't need a super high IQ. And getting into medical school in the first place already means you have above-average academic intelligence. Work ethic, interpersonal skills, and emotional intelligence are more important, and they aren't found anywhere on a transcript or standardized exam score. IMO.


 

I don't think a USMLE-style exam is necessarily a better option than the common current evaluation schemes, but it serves as a reasonable compromise between three competing preferences:

 

1) De-emphasizing unproductive testing (i.e. why we eliminated grades in the first place)

2) Providing programs with some sort of quantitative measure to look at applicants (so they don't feel the need to do things like look at undergrad marks)

3) Providing students with a goal to work towards with a meaningful impact on their residency chances

 

We got rid of grades in medical school because they were time-consuming and stress-inducing, yet were not always that relevant to the actual practice of medicine - at least not as relevant as other factors like social skills and work ethic. Some programs have balked at the notion of not having a quantitative measure to the point that they're asking for undergraduate grades - numbers which are even further removed from the ability of a candidate to be a good physician than medical school marks were. Lastly, from a student perspective, it is somewhat troubling that there's not too much I can do to separate myself from other candidates. Part of this is because programs haven't placed much/any weight on what students can devote their time to - programs don't seem to care about being active or productive in the school/community, for example - but the other part is because there isn't anything outside of clerkship/electives that programs can tell students to excel at.

 

A USMLE-style exam - while it would be a bit of a waste of time/energy, wouldn't capture most of what it means to be a good physician, and would give medical students only a single outlet to focus their efforts on to prove they're capable - would at least accomplish the goal of providing a quantitative assessment in a standardized, unobtrusive manner. It certainly couldn't be worse than looking at undergrad transcripts.


 

They do care about activity, to a point - but yeah, only to a point. I get it - I was pretty involved at my school (well, our school :) ), but really those skills aren't directly related to most of what I do in my residency. They would rather have someone who is super focused on studying than someone who can, say, run a student council.


 

If the goal is to find someone who's hard-working, those sorts of activities seem like the main thing to look at - the rest of school is mostly about showing up and doing an adequate amount of reading, neither of which is overly hard to accomplish.

 

Still, I'm not arguing for a much greater focus on ECs in the evaluation process - it's too easy to look active on paper without actually being active. I'd rather not have additional competition for the activities I'm currently involved in, all of which I've done because I enjoy the work and, somewhat contrary to your statement, because I believe they'll have some relevance to residency and to practice beyond it. There are definitely a few people - not many, but a few - who try to collect a ton of positions while doing the minimum work possible for each, and I'd rather not give more people an incentive to do this. It makes it too hard to get things done.


 

I was being pretty specific in my point :) As a radiology resident, a lot of what I did before is not as relevant as perhaps other things would be. For my own personal interests and future career, those activities are definitely useful. Plus, like you, I would have done them anyway - the motivation had nothing to do with getting into residency. Ha, I just like making things better wherever possible.


  • 3 weeks later...


 
I agree with you: a simple exam would not be the perfect solution. Sure, it would help programs review applicants, and because of this it would be useful from my perspective. But I freely admit that a single exam could not reflect the overall clinical gestalt of an individual.
 
To do that you need much more than a simple exam. 
 
Some of the more interesting work within postgraduate medical education is the notion of self-guided, competency-based evaluation and promotion. What this means in plain English is having a trainee evaluated by a preceptor, on the job, on one particular skill at a time. Such a skill could be the cardiovascular exam, time management, breaking bad news, or developing a treatment plan for CHF, etc. Some of these skills may be practical, some may be more theoretical; all are deemed necessary. If a trainee is deemed competent in a particular skill, the expectations are advanced and the trainee is re-evaluated in that respect in the future. The scale on which the trainee is evaluated usually resembles: incompetent, competent for training level, performing above training level. Such a system results in promotion and increased responsibility being based upon actual demonstration of a skill set, with a very specific set of criteria determining performance. Doing this requires identifying the numerous skills required of a physician, and the important components and tasks within each skill being evaluated. Understandably, this is a lot of work, which is one of the reasons why rolling out such systems takes a significant amount of time.
 
Interestingly, such a system could allow promotion of the trainee based not upon time in training but upon demonstration of a skill set. This is much more of an apprenticeship-style model. Theoretically, a strong trainee would progress rapidly, while a trainee who struggles may require more time before promotion.
 
Such evaluation systems are currently under development in some postgraduate residency programs, and some programs are already piloting them.
 
It is possible that such evaluation systems may eventually be used within undergraduate MD programs. If that happened, it would provide much more information to residency programs during CaRMS, because these evaluation systems create a portfolio of evaluations on very specific and necessary topics. Such a portfolio would give a much more detailed clinical assessment of the applicant. It would also allow programs to tease out particular skills that are highly regarded within a given specialty; for example, procedural skills would probably be more heavily emphasized within a surgical specialty. It would also allow programs to identify applicants who are performing above their current level of training, a desirable trait, as such a resident is likely to be easy to train and low-maintenance.
 
Unfortunately, such a system becoming standard across all medical schools is a long way off within the undergraduate realm.
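To make the idea of a skill-by-skill portfolio more concrete, here is a minimal Python sketch. The example skills come from the post above; the field names, rating labels, and the three-observations promotion rule are invented assumptions for illustration, not any official framework:

```python
# Hypothetical sketch of a competency portfolio; names and thresholds are invented.
from dataclasses import dataclass, field
from enum import Enum

class Rating(Enum):
    BELOW_LEVEL = "below expected for training level"
    AT_LEVEL = "competent for training level"
    ABOVE_LEVEL = "performing above training level"

@dataclass
class SkillAssessment:
    skill: str       # e.g. "cardiovascular exam", "breaking bad news"
    rating: Rating
    assessor: str
    setting: str     # clinic, ward, OR, simulation, ...

@dataclass
class Portfolio:
    trainee: str
    assessments: list[SkillAssessment] = field(default_factory=list)

    def ready_to_advance(self, skill: str, required: int = 3) -> bool:
        """Promotion by demonstrated competence rather than time served:
        advance a skill once it has been rated at or above level `required` times."""
        ok = [a for a in self.assessments
              if a.skill == skill and a.rating is not Rating.BELOW_LEVEL]
        return len(ok) >= required

# A program reviewing such a portfolio could filter for the skills it values most
# (procedural skills for a surgical specialty, say) and for above-level ratings.
p = Portfolio("clerk_01")
p.assessments.append(SkillAssessment("cardiovascular exam", Rating.AT_LEVEL, "Dr. A", "clinic"))
p.assessments.append(SkillAssessment("cardiovascular exam", Rating.ABOVE_LEVEL, "Dr. B", "ward"))
print(p.ready_to_advance("cardiovascular exam"))  # False until a third at-or-above rating
```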

 

 

 

 

It would be amazing to have a competency-based evaluation model in medical school, but that takes a competency-focused education model, which is not what we have now. A good portion of medical school education is focused on knowing things rather than being able to do things, particularly in pre-clerkship.

 

I hope that starts to change moving forward, with medical schools caring less about knowledge itself (especially in a world where all that information is easily accessible online in multiple formats) and more about how to apply that knowledge in a practical setting, but it's a slow process and it'd be resource-intensive. As an aside, I'm still at a complete loss as to where all the money goes at medical schools, so for all I know such a transition could be completely feasible, but I've had trouble finding any hard numbers.

 

Written testing does at least evaluate knowledge, even if it often evaluates short-term knowledge rather than sustained retention. If medical schools want knowledge to be the focus of their education, I think it's reasonable for that to be the focus of evaluation as well. I dislike this model - it seems fairly poor at encouraging long-term learning, and as a result the related evaluations aren't overly helpful for assessing potential residents (part of the reason we moved to the pass/fail scheme) - but until that educational model changes, we might as well work within it as best we can.


 

I disagree. Clerkship is where you learn how to "do" and function in a clinical environment. Pre-clerkship prepares you for that, and aside from basic history taking and physical examination skills, the bulk of that is the accumulation of knowledge. It is not sufficient simply to "look things up". For example, I learned most of my anatomy at the beginning of first year. As with anything else, repeated re-exposure helps reinforce that initial learning, but you cannot go into clinical training without a certain knowledge base. 


 

I completely agree that a knowledge base is necessary, but what I'd argue is that didactic learning which focuses on knowledge alone is a relatively ineffective way to develop that knowledge base. A more task-based approach that incorporates working with that knowledge would make the information more salient and easier to recall, while building in the repeated re-exposure you mention. I mention being able to look things up not because I think it's acceptable not to develop a knowledge base in pre-clerkship, but because medical schools are no longer unique repositories of medical knowledge - they should offer more than the passive instruction that could be equally achieved by reading Wikipedia or listening to freely available online lectures.

 

At a typical four year program, even the last subjects studied are completed months before clerkship starts. It could be years between a piece of information being learned in pre-clerkship and used in clerkship. A lot of information gets forgotten in that time. That second exposure in clerkship then becomes less of a reinforcement and more of a true re-introduction, where the mental connection has to be more-or-less completely reforged. Pre-clerkship isn't preparing students for clerkship if a good portion of pre-clerkship information is lost by the time clerkship rolls around.

 

I've already forgotten tons of what I learned in the past two years, in part because I have little contextual information to draw on. I find most of what I've remembered has been primarily of relevance to those history taking and physical examination skills, because there's an obvious application and the knowledge gets repeated. The goal of pre-clerkship - namely building the knowledge base - is perfectly acceptable, but the methods by which schools attempt to achieve that goal seem flawed. What I understand of educational research supports that notion, though I'd love to see counter-examples.


 

There is nothing stopping such a model from testing core knowledge in this fashion as well. Much of this is built into the assessment of tasks, usually as prompted follow-up questions.

 

The gist of this model is the evaluation of on-the-job training. I don't think you can do medical school without exams, but you can do a hell of a lot better than the useless clerkship evaluations that are produced today. Under this theoretical system, the portfolio created from the assessment of competency-based tasks replaces the present rotation evals.

 

Personally, as someone who has been exposed to this style of training in real clinical settings, albeit in the postgraduate world, I have found it fairly fruitful. I'm of the opinion that this style of evaluation will inevitably become universal, at least within postgraduate training.


  • 2 weeks later...


 

Indeed, a few select IM programs in the US have already transitioned to a competency-based evaluation model. Many more, in other specialties as well, are following suit, and the approach is making headway into the undergraduate medical world too. I agree that this is the next step.

 

With this said, having gone through the NRMP (only) and having several friends back home who went through CaRMS, it is clear that without objective scores and rankings the process of selecting residents becomes more random. You can still take into account publication count/quality, leadership, LORs, and the like, but adding in class rank (there are only two true P/F medical schools in the States, Yale and Harvard, and yes, they can get away with it comfortably) and Step scores allows for better separation.

 

Although the USMLE examinations were always intended for licensing purposes only (i.e., pass/fail is what matters), they have grown into measuring clinical decision-making (especially Step II). Long gone are the days of regurgitation like you will find in the released USMLE exams from the early/mid 2000s. It is more along the lines of recognizing the diagnosis within a typically (and unnecessarily) convoluted presentation and proceeding to the next best step in management or the gold-standard treatment, or going one step further and treating the possible complications thereof. They have also done a good job with small details like allowing interactive auscultation for murmurs and adventitious breath sounds, and this will be built upon by the NBME.

 

Anyway, the bottom line is that there is something to be gained from a standardized exam or two prior to putting in applications. Class rank will further help determine interview selection. Yes, it is stressful not to be in a P/F curriculum, and I completely understand the movement away from grades. At the same time, having gone through a non-P/F grading curriculum, there is no letting up at any point - an effective way to combat complacency *at any point in training*. If this knowledge base continues to be built the right way during the clerkship years, it translates into a superb knowledge base that can then be combined with the requisite interpersonal skills to really do right by your patients.

 

The tradeoff with more grades/standardized examinations is that some students from the homogeneous middle group (referred to earlier in this thread) will sink to the bottom of the interview/rank-list pile, but the advantage is that those who are exemplary are recognized as such on the academic front, in addition to the research productivity and networking that the Canadian system disproportionately rewards.

 

To all reading this thread: your mileage may vary, but for me, I appreciated the opportunity to do well (or not) and have it appropriately reflected on the academic side through USMLE scores and class rank.


  • 3 weeks later...


Do you know where grades really matter? The IMG match! There, even family medicine is competitive, and most IMGs would just love to match to anything rather than drive cabs and deliver pizzas. Talk about double standards! If you simply pass the exam, you have a 0 percent chance of matching to even FM. They completely ignore any medical experience you have in your home country and only evaluate two things: exam scores (they want, say, a cardiothoracic surgeon who has been practicing for years to study psych, ob/gyn, peds, and IM again and expect him to do well on the exam) and Canadian medical experience (which is impossible to get if you have been a practicing doctor for many years).


If you really wanna talk about throwing darts... it is the IMG match, for sure. Nobody knows why anyone matched or did not match. You can find an applicant with high scores who matched, and one with high scores who did not match; one with a low score who matched, and one with a low score who did not match. There is absolutely no trend.


  • 3 weeks later...

They seem to be reducing the number of residency spots in the highly competitive specialties. Just eyeballing it, the ROADP specialties all got hit: Radiology (-9), Ophthalmology (-1), Anesthesia (-2), Dermatology (-4), Plastics (-1).

 

You're right. That seems odd to me. Sure, specialties like Ophtho are a bit oversaturated, but specialties like Derm? I'd argue we need a good number more Dermatologists and certainly not fewer.

 

The flip side is that they seem to be increasing spots in less competitive specialties that could use an increase, like Family and Psych. In addition they're lowering the number of spots in fields with poor job opportunities, like Orthopedics and Neurosurgery.


I have updated the stats for the 2015 year! The file has been uploaded to the original Dropbox link.

Top 3 most competitive specialties for 2015: 1) Dermatology, 2) Plastics, 3) Emergency. Same three as 2014, but in a different order.

Wow, I didn't think Emerg was that high up. I thought it would be Ophtho or Rads.


The CFMS puts out these stats each year, but they take a while - here's the probability of matching into a specialty if you rank it as your first choice, along with the probability of matching into an alternative specialty if you rank the listed specialty as your first choice. Kind of an intention-to-become analysis. Overall, it lines up well with what Vanguard has provided, with some minor differences. All figures are for the 1st round only and for CMGs only.

 

Stats are available here.
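For anyone curious how a figure like the first-choice match probability is actually computed, here's a toy Python sketch of the calculation. The records below are entirely invented; the real tables are derived from CaRMS first-iteration data:

```python
# Hypothetical illustration: for each applicant we record their first-choice discipline
# and where they actually matched (first iteration only). All records are made up.
applicants = [
    {"first_choice": "Dermatology", "matched_to": "Dermatology"},
    {"first_choice": "Dermatology", "matched_to": "Family Medicine"},
    {"first_choice": "Psychiatry",  "matched_to": "Psychiatry"},
    {"first_choice": "Psychiatry",  "matched_to": "Psychiatry"},
    {"first_choice": "Psychiatry",  "matched_to": None},  # went unmatched
]

def first_choice_stats(records, discipline):
    """P(matched to first choice) and P(matched to an alternative discipline),
    among applicants who ranked `discipline` first."""
    pool = [r for r in records if r["first_choice"] == discipline]
    got_it = sum(r["matched_to"] == discipline for r in pool)
    got_other = sum(r["matched_to"] not in (discipline, None) for r in pool)
    n = len(pool)
    return got_it / n, got_other / n

p_first, p_alt = first_choice_stats(applicants, "Dermatology")
print(f"P(first choice) = {p_first:.0%}, P(alternative discipline) = {p_alt:.0%}")
```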

 

 


 

Emerg has been popular for a while now; it's virtually always in the top 4 (once you eliminate the small specialties). Ophtho is usually towards the top and still is to an extent this year - it was a fairly non-competitive year for it, but it's had years like this before. Rads hasn't been all that competitive in recent years, though it was a bit more competitive this year by the looks of it.

 

Craziest result for me? This year an applicant's chances of getting Neurosurgery if they wanted it were virtually identical to an applicant's chances of getting Psych - and way better than an applicant's chances of getting Neurology. Insanity I say!

 

Edit - Wow, embedding the table directly in the thread did not work at all... threw in a link to the stats instead.


I'm curious as to how the chance of matching to an alternative discipline is determined. Wouldn't that depend on which alternative discipline you choose? Also, why are there such huge differences between specialties with respect to the chance of matching to an alternative discipline? Is it because your first choice is too specialized? For example, Ophtho looks terrible, but Derm looks much better with respect to the chance of matching to an alternative discipline.

 

Why do you guys feel Emerg is so competitive? This is news to me. 

