Premed 101 Forums


About 098123


  1. Even better! Wow... a fail rate of 27% is absolutely unheard of for an exam like this. Not sure how this can continue and remain sustainable...
  2. I'm certainly no expert in stats, but my reading of the results suggests that roughly 20% of test-takers are now failing under this new scoring scale... Anyone with more knowledge of statistics, please feel free to correct!
  3. If you're willing to contribute another $250 to the MCC (in addition to the $2500 it costs to re-write), they will do a score "re-check", but the MCC website clearly states that this does not include a "re-evaluation". So essentially they verify that scores were entered into the computer and attached to the correct profile, and that there were no ink errors/smudges or tears in the scantron sheets. The current system assumes that whatever the examiner marked on the bubble sheet is 100% accurate and objective, and from that point it becomes 100% concrete.

Since none of the stations are recorded with video or audio (unlike other high-stakes, high-cost certification exams), there is no option for "re-evaluation" and essentially nothing to hold examiners accountable for what gets marked on the bubble sheets. Beyond this, without an option for re-evaluation, anyone who fails is essentially kept in the dark about how to improve or make adjustments going forward.

One of the most difficult things to accept about this exam, in addition to everything mentioned above, is the de-humanized "support" provided when reaching out to the MCC for guidance. I've now heard of two separate candidates who failed by 1 point. One of these individuals called the MCC to explore options for re-evaluation or further feedback. They described the conversation as robotic, as if the person on the line was reading off a script, and any further action would cost more money (big surprise).

Getting back to the big picture: many parts of the country are desperate for family doctors, and many thousands of people don't have a primary care physician. In my case, I'm now fully certified by the CCFP based on completion of residency and the CCFP exams, yet the MCC will delay my ability to practice for 7+ months, factoring in the time it takes to get results. Even if I had failed by just one point, as others have, my situation would be exactly the same.

One point on a 2-hour exam would overshadow 10+ years of post-secondary training. Somehow an exam that used to screen for minimal competency has become an exam with a fail rate in the neighbourhood of 20%. An exam that used to pose very little challenge for residents in generalist fields of medicine is now seeing more and more family medicine residents fail. Something isn't right...
  4. On the topic of possible site/examiner variation: as per the MCC website, "objectivity of scoring is achieved using standardized guidelines for exam administration, the training of examiners and of SPs, and the use of predetermined scoring instruments for OSCE stations." So the claim is that objectivity is achieved?

The sample rubrics posted on the MCC website show a standardized checklist for examiners, as well as an "interaction rating scale" where they're asked to score our interviewing skills or physical exam skills as "inadequate, marginal, adequate, or superior": https://mcc.ca/examinations/mccqe-part-ii/clinical-stations/case-1-14-minute-station-ken-scott/ This interaction rating scale inherently carries a great deal of subjectivity. Further to that, the "standardized checklists" give the examiner the option of marking each bullet point as either "attempted only" or "completed satisfactorily". I'm not sure how you can "attempt" to ask about fever or weight loss but not satisfactorily ask about fever or weight loss; that distinction would only make sense for physical exam techniques.

Another point worth mentioning: the examiner checklists assume the examiner is listening attentively for the entirety of the 14-minute station, and then for the 7 more stations they have to sit through after it, totalling almost 120 minutes of attentive listening. If a question is asked but missed by the examiner, there is no way to know. High-stakes exams like the CCFP SOO stations are recorded on video so that results can be re-evaluated when significant discrepancies occur. There is no option for re-evaluation of scores on the LMCC2. You get a pass/fail score, then a generalized breakdown of your performance with no tailored feedback or insight into specific adjustments that could be made. If you want your score re-checked, it's another $250, yet the website says a re-check does not include a re-evaluation.

So essentially, whatever the examiner marks on the checklist is set in concrete, and a re-check just confirms that the scores were entered into the computer properly. Perfect!
  5. Hi all, family med resident here. I received news yesterday that I passed the CCFP exams from this past April, then received news today that I failed the LMCC2 exam written in May. More concerning, this was my second attempt at the LMCC2 (I was unable to pass in October 2018 as well). Furthermore, my score somehow dropped lower than on the first attempt, despite a modified approach and an overall feeling that my performance was significantly improved the second time around.

I know this isn't the first "I can't believe I failed the LMCC" thread, but in light of the changes to the LMCC format/assessment implemented in October 2018 ("the MCC made substantial revisions to the MCCQE II in 2018 with the implementation of a new examination blueprint"), I was wondering if anybody else is struggling to make sense of their results from this exam?

The scoring scale previously ranged from 50 to 950 with a mean of 500 and a standard deviation of 100. Since October 2018, scores range from 50 to 250 with a mean of 150 and a standard deviation of 20. I realize these are equivalent/proportional, but now that the pass score is set at 138, doesn't that indicate that roughly 16% of test-takers scored below 130 (one standard deviation below the mean: 150 - 20 = 130)? Beyond that, anyone who scored 131-137 would fail as well, which would bring the total fail rate to almost 20%. If this logic is correct, isn't the fail rate much higher than on previous iterations of the exam before October 2018?

Regardless, I'm now even more unclear about what I'm doing wrong than I was after the first attempt. The "supplementary information" document that follows the pass/fail notification doesn't get sent for another two weeks, and it's very generalized anyway, providing no specific or detailed feedback. I simply don't understand: two years of family med residency without any concerns from preceptors, successful completion of the CCFP exams, and I even reviewed each of the stations with a colleague who passed the exam comfortably; we can't identify any significant differences that would account for such profoundly different exam scores. Any comments or insight would be greatly appreciated.
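For what it's worth, the one-standard-deviation shortcut above actually undercounts the tail. Assuming scores on the new scale are approximately normally distributed with the stated mean of 150 and SD of 20 (an assumption; the MCC publishes only the scale parameters, not the full score distribution), the fraction falling below the 138 cut score works out to roughly 27%, not 20%. A quick sketch using only Python's standard library:

```python
from statistics import NormalDist

# Assumption (not confirmed by the MCC): scores on the post-2018 scale
# are approximately normally distributed with mean 150 and SD 20.
scores = NormalDist(mu=150, sigma=20)

below_130 = scores.cdf(130)  # one SD below the mean
below_138 = scores.cdf(138)  # below the 138 cut score: z = (138-150)/20 = -0.6

print(f"below 130:         {below_130:.1%}")               # 15.9%
print(f"below 138 (fail):  {below_138:.1%}")               # 27.4%
print(f"scoring 130-137:   {below_138 - below_130:.1%}")   # 11.6%
```

Under that assumption, the 130-137 band alone holds about 11-12% of test-takers, which is why adding it to the 16% below one SD pushes the estimated fail rate to roughly 27%, the figure quoted elsewhere in this thread.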