
McMaster Medical School "Chance Me": A Statistical Approach



This is my attempt to build a "chance me" thread based on data.

 

What information do I have:

 

1) The statistics of admitted applicants (http://fhs.mcmaster.ca/mdprog/documents/Classof2017.pdf)

2) The knowledge that z-scores are used by McMaster's MD Admissions Office

 

What is a z-score? A z-score is a statistic that describes where your score sits relative to everyone else's on a distribution curve.

For instance, a z-score of +1.64 means that you are in the top 5% of the data set, whereas a z-score of -1.64 means that you are in the bottom 5% of the data set. 
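If you want to translate a z-score into a percentile yourself, here is a minimal Python sketch (it assumes a normal distribution, and the function name is just mine for illustration):

```python
from statistics import NormalDist

def z_to_percentile(z):
    # Fraction of a standard normal distribution that falls below z,
    # expressed as a percentile.
    return NormalDist().cdf(z) * 100

print(z_to_percentile(+1.64))  # ~94.9 -- roughly the top 5% of the data set
print(z_to_percentile(-1.64))  # ~5.1  -- roughly the bottom 5%
print(z_to_percentile(0.0))    # 50.0  -- exactly average
```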

 

So? McMaster calculates a z-score for your performance on CASper, your MCAT Verbal Reasoning score, and your undergraduate GPA. Each z-score is then translated into that component's share of the pre-interview score (32% per component). This means that the 32% score you receive for, say, your MCAT VR depends on how you fared relative to the other people who applied in your cohort.

 

What do you need to calculate a z-score? You need the mean of the data set and the standard deviation (SD), which is a description of how "dispersed" the data is. A relatively low SD reflects a narrow distribution whereas a high SD reflects a wider distribution. Using this information, a z-statistic can be calculated as follows:

 

Z= ("Your score" - "Mean value of that score")/"SD of that score"

 

Where is the standard deviation? Most medical schools don't make this data available and simply disclose a "mean" to the public. McMaster, however, happens to give us a distribution of the data. From this, we can estimate the Standard Deviation of the Verbal Reasoning and Undergraduate GPA components (from the link above). CASper information is unfortunately not available.

 

I entered the data into Excel and calculated the standard deviations; I have attached the file for your review. For the verbal reasoning component, the SD is exact. For the undergraduate GPA component, I estimated it using the midpoint of each published category. For instance, for the 3.90 to 4.00 category, I used a midpoint value of 3.95. The calculation seems to have worked decently: using this method, I obtained the officially published mean (3.83).
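For anyone who would rather reproduce this outside of Excel, here is a rough sketch of the same mid-range method in Python. The bin counts below are placeholders, not the actual Class of 2017 numbers; plug in the published counts from the PDF to reproduce the mean and SD reported below.

```python
from statistics import fmean, stdev

# GPA bin -> number of admitted applicants. These counts are made up for
# illustration; substitute the real counts from the McMaster PDF.
gpa_bins = {
    (3.50, 3.60): 5,
    (3.60, 3.70): 10,
    (3.70, 3.80): 30,
    (3.80, 3.90): 60,
    (3.90, 4.00): 45,
}

# Expand each bin into `count` copies of its midpoint (e.g. 3.95 for 3.90-4.00).
values = []
for (low, high), count in gpa_bins.items():
    values.extend([(low + high) / 2] * count)

print(round(fmean(values), 3))   # estimated mean GPA
print(round(stdev(values), 3))   # estimated (sample) standard deviation
```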


 

Here are the numbers:

 

Verbal Reasoning Data: Mean 10.99, SD 1.07

Undergraduate GPA Data: Mean 3.83, SD 0.146
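Putting the formula and these numbers together (the helper name is mine, purely for illustration), the cases below use exactly this calculation:

```python
VR_MEAN, VR_SD = 10.99, 1.07
GPA_MEAN, GPA_SD = 3.83, 0.146

def z_sum(gpa, vr):
    # Pre-CASper z-score sum: GPA z-score plus VR z-score.
    return (gpa - GPA_MEAN) / GPA_SD + (vr - VR_MEAN) / VR_SD

print(round(z_sum(3.83, 11), 2))  # ~0.01  (Case 1: essentially average)
print(round(z_sum(3.83, 10), 2))  # ~-0.93 (Case 2)
print(round(z_sum(3.95, 10), 2))  # ~-0.10 (Case 3)
```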

 

Applying the calculations to assess competitiveness: Let's use the formula above and look at some cases.

 

Case 1: GPA of 3.83, VR of 11

VR Z score + GPA Z score = 0.

 

This is a completely average applicant who would need an average CASper score (among those who will be interviewed for that cycle) to gain an interview. This does not mean a CASper score in the upper half of all applicants, but a CASper score that is average among those who will gain an interview (which, at the end of the day, means the average CASper score of the roughly 550 lucky individuals who are invited). Be careful with the interpretation.

 

Case 2: GPA of 3.83, VR of 10

 

VR Z score + GPA Z score = -0.93 + 0 = -0.93

 

A somewhat negative score would require an above-average CASper score to gain an interview.

 

Case 3: GPA of 3.95, VR of 10 

VR Z score + GPA Z score = -0.925 + 0.822 = -0.103

 

A slightly negative z-sum would imply the need for a slightly above-average CASper to gain an interview.

 

However, since the calculation underestimates the GPA z-score for this individual, this person actually has something closer to a slightly positive score and an ABOVE-average chance. (See Limitation I below for an explanation of why.)

 


What does the Z-score mean in terms of chances? This part is hard to flesh out. I know some people on this forum have asked how much CASper can possibly compensate for you. Let's look at cases of interviewed/rejected applicants.

 

Case 4: GPA of 3.53, VR of 11

VR Z score + GPA Z score = 0 + -2.05 = -2.05

 

This applicant was interviewed. Obviously, their CASper must have served them very well to make up for such a negative score.

 

Case 5: GPA of 3.98, VR of 13 

VR Z score + GPA Z score = 1.87 + 1.03 = +2.90 

 

This applicant was rejected. Their CASper must have been quite poor to have negated a z-sum of +2.90.

 

Conclusion: CASper can compensate for (or sink) an application quite a bit, and that is what the application will really come down to. The z-score sum gives you an indication of where you stand going into the exam.
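As a rough extension of this model (my own framing, not anything McMaster publishes), you can also invert the calculation: given a pre-CASper z-sum, what CASper z-score would pull you back to "average", and roughly what percentile that corresponds to on a normal curve:

```python
from statistics import NormalDist

def casper_needed(pre_casper_z_sum):
    # CASper z-score required to bring the overall sum back to zero,
    # plus the approximate percentile that z-score corresponds to.
    required_z = -pre_casper_z_sum
    percentile = NormalDist().cdf(required_z) * 100
    return required_z, percentile

print(casper_needed(-0.93))  # Case 2: z of ~+0.93 -> roughly the 82nd percentile
print(casper_needed(-2.05))  # Case 4: z of ~+2.05 -> roughly the 98th percentile
```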

 

Limitations:

 

I) I assume that the data is normally distributed. This is true for the VR data, which is why the z-score statistic works well for verbal. It is less accurate for GPA, especially at high GPAs, since the GPA data is skewed (bunched up toward the top of the scale). An adjusted statistic could be calculated for high-GPA individuals to better reflect their percentile; as calculated here, the z-score sum for individuals with a GPA of 3.9+ is somewhat underestimated.

 

II) The statistics come from the GPA/VR of accepted applicants, not interviewed applicants. The scores might be more forgiving for an interview, since GPA/VR is "selected for" twice in the process: pre- and post-interview. Further, if GPA/VR were correlated with success on the MMI, the interview statistics might be even more forgiving; however, there is no evidence for that latter claim.

 

Hope this was interesting, and I am looking forward to feedback and questions.


Thanks for the analysis; I'm still a bit confused as to how to interpret the Z-score though. A Z-score of 0 means that your stats are better than 50% of the accepted class last year (e.g., 3.83 GPA, 11 VR, exactly average CASPer). Assuming the applicant pool this year is exactly the same (which is not a valid assumption), this would mean you are guaranteed an interview, correct? Now, in all likelihood the average GPA and VR is slightly higher this year, so this same person would be at, maybe, -0.2. This would still likely be a guaranteed interview. How many SDs below the mean do you think you can safely be (after CASPer is accounted for) to be "guaranteed" an interview?

 

It's hard to say. Cases 4 and 5 demonstrate that really nothing is guaranteed. Your CASper can make a tremendous difference in your application. A GPA of 3.83 and VR of 11 means that your chances for invitation are about average going into the CASper exam. Your performance on CASper makes the final determination. 

For those not fitting those statistics who wish to gauge their chances going into CASper, the Z-score sum can be used to give you an idea. However, again, even individuals with highly positive stats (i.e. the GPA of 3.98 and 13 VR) could be rejected. This is just a way to gauge if you need an above or below average CASper, really.


Good analysis.

 

The only other limitation I would point out is that these are the accepted applicant statistics you are working with, not the interviewed applicant pool. The interviewed applicant pool may have slightly lower metrics overall, so the Z-scores should be a bit more forgiving if you are using these formulas to gauge your chance at an interview.


 

Good point. GPA and VR may be slightly over-estimated because they are "selected for" both pre- and post-interview. The pre-interview numbers would also be more forgiving if VR/GPA were correlated with success on the MMI (not sure whether that is true or not), but yes - the statistics aren't perfect, for sure.


 

Right, but I'm wondering if it's possible to estimate a kind of "cut off" Z-score for an interview. For example, although I know that perceived performance on CASPer isn't correlated very strongly with actual performance, if someone felt really good about it, is naturally a good writer/arguer and gave plenty of long, well reasoned answers on CASPer they can probably assume they scored at least in the top half, meaning their Z-score including CASPer would be the same as their pre-CASPer score or slightly better. So I'm wondering how negative the post-CASPer score can be to have a reasonable shot. (Again, I do realize there's a ton of uncertainty still inherent in this process but it seems possible to at least estimate.)


Put another way, a post-CASPer score of 0 would put you in the middle of the accepted applicants. There were 206 accepted out of 4973 applicants, and I'm assuming that those who were accepted are pretty much at the top of those 4973. So even with a z-score that would put you at the bottom 5% of the accepted applicants, you are still likely much higher than the average applicant as a whole, and since the interview size is more forgiving than the acceptance size, bottom 5% of (last year's) accepted applicants post-CASPer might still mean an interview. Is there a problem with that reasoning?


 

I would say about -3 is the cutoff beyond which it is really moot. I would estimate that a score that negative would require a CASper in at least the top 5%.


 

The z-score SUM takes into account performance on all aspects. Under this model, there would be a cutoff for the z-score sum of accepted applicants. Someone may be in the bottom 5% of, say, VR (i.e. a 9 VR), but they would need to be exceptional in the other aspects (i.e. GPA and CASper) to compensate for that VR. Overall, their z-score sum would still be relatively high among the total 4973 applicants, even if they end up being the 550th interviewed applicant.


What makes your question particularly difficult is that answering it properly, quantitatively, depends on the statistics of the entire applicant pool... and we only really know about the accepted population.

 

The best we can really determine is that CASPer is significant enough to make or break you... If we had the mean and SD of the entire applicant pool, we could figure out exactly what your score needs to be to get an interview.

 

The one criticism I have of the analysis is the step where you summed the z-scores... While that gets the point across, you should technically average them (i.e. your z_VR, z_GPA, and z_CASPer are all equally weighted according to Mac, so if you had z-scores of 0, 0, and 1 respectively, your applicant z-score would be 0.33, not 1).
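In code, the equal-weight version would be something like this (a quick sketch, names mine):

```python
def applicant_z(z_vr, z_gpa, z_casper):
    # Equal weighting across the three components: the average, not the sum.
    # The ranking of applicants is unchanged (the average is just the sum
    # divided by 3); only the scale differs.
    return (z_vr + z_gpa + z_casper) / 3

print(round(applicant_z(0, 0, 1), 2))  # 0.33, not 1
```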

 

Edit - the first bit was in response to positive tension, re: how much can CASPer compensate.


Ha! Looks like we have a convert to quantifying McMaster interview chances. Glad to see your numbers line up so closely with the rougher ones I posted last week.

 

So, I obviously agree with the final formula - it'd be pretty hypocritical of me to say I didn't when I was promoting almost the same analysis - but I will make a slight quibble.

 

It's true that Mac uses Z-score analysis. What's not true is that the mean values and SDs they use in calculating those Z-scores can be divined from their pool of admitted applicants. For starters, they don't know who they're eventually going to accept, or even interview, when they calculate those Z-scores. Admitted applicants are also distinct from interviewed candidates: the admitted pool misses individuals who rejected Mac's offer for another school (applicants with multiple acceptances often have good stats), and those who never received an offer post-interview (a group that should have slightly lower-than-average stats, since GPA and VR are still part of Mac's post-interview formula).

 

I would suspect that they calculate Z-scores based on all applicants and that a positive combined Z-score from all three aspects of their pre-interview formula - cGPA, MCAT VR, and CASPer - is necessary for an interview invite. That would better explain CASPer's seemingly large potential to sway results. Your fifth case demonstrates this nicely: the chance of an individual having a CASPer Z-score below -2.90 is about 0.2%. Possible, but extraordinarily unlikely. But let's say they used a mean cGPA of 3.6 (SD 0.20) and a mean VR of 10 (SD 1.5) when calculating their Z-scores, and that a total Z-score of 2.5 was needed (note: all of those numbers are completely made up just to illustrate my point - they're meant to be plausible, not definitive). Case #5 would then have a Z-score of about 3.9 going in and would only need a reasonably below-average CASPer, say a Z-score of -1.5, to get rejected.
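In code, that illustrative scenario looks like this (again, every number here is invented purely to make the point):

```python
def z(value, mean, sd):
    return (value - mean) / sd

# Hypothetical whole-pool statistics and interview threshold -- NOT real
# McMaster figures, just the made-up illustration from above.
pre_casper = z(3.98, 3.6, 0.20) + z(13, 10, 1.5)   # 1.9 + 2.0 = 3.9 "going in"
total = pre_casper + (-1.5)                         # a modestly below-average CASPer

print(round(pre_casper, 2))            # ~3.9
print(round(total, 2), total >= 2.5)   # 2.4 False -> below the hypothetical cutoff, so rejected
```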

 

Basically, the end result holds up to scrutiny for most individuals, but the methodology is flawed. You're trying to recreate Mac's methodology using incomplete information about it.

 

How is this different from what I posted last week? I made no assumptions about their process, I just fit the end results to a simple linear model and checked for inconsistencies with the data. The fact that the two line up so closely is likely more coincidental than anything.



 

Agreed! I updated my limitations section to reflect that the population I am using for this analysis is not necessarily the population being assessed for an interview. It is indeed only an estimate.

 

As far as averaging goes, I agree - but I think that summing communicates the most relevant information more quickly.

 

Thanks for the feedback. 


 

 


 

My goal was to re-create as transparent and rigorous a model as I could with what was available, stating the limitations and giving applicants a tool (justified by reasoned analysis) to inform their decision to apply to McMaster's MD program. I did not want to use intuition to guide the numbers I proposed to applicants, or to propose a linear model when the process is more accurately described by percentiles.


 

What you've used isn't rigor, just the illusion of it.

 

Using Z-scores would be more accurate than the admittedly naive linear model I used if you knew the way the Z-scores were calculated, from what population, or how they were used - but you don't. I know what I didn't know and didn't make assumptions about those unknowns. You've made multiple assumptions about the unknowns without any justification.

 

And for all your claims of increased accuracy with percentiles, your end result really only duplicates what I posted a week ago. Our models' predictive formulas are essentially equivalent.


 

 


 

 

 

 

 

Assumptions are often a component of analyses. Unfortunately, we rarely have access to all of the data we would want in order to conduct a perfect analysis. The presence of an assumption does not automatically mean, however, that the analysis is worthless.

 

In this model, I assume that the accepted population has a mean VR/GPA similar to that of the interviewed population. Let's assess the assumption: GPA/VR are "selected for" both before and after the invitations to interview, meaning the accepted statistics might be slightly higher than the interviewed statistics. However, VR/GPA only comprise 30% of the post-interview formula; the bulk of the result is determined by the MMI. I am not aware of any evidence that GPA/VR correlate strongly (or at all) with MMI performance - in fact, the MMI is used as a basis to assess applicants on components OTHER than their academic strength. Given that the MMI primarily determines admission status, the GPA/VR means might be slightly higher than those of the interviewed pool, but not substantially. The model isn't "broken" on the question of what the means are. It provides valid estimates.

 

You could also question the validity of the SDs that I calculated. But we know that the SD of a sample of a population (assuming N > 30, as per the central limit theorem, which this is) approximates the SD of the population. My process of calculating the SD holds, so the model isn't "broken" here either.

 

Another critical question is the fact that I am using accepted applicant data to assess competitiveness, not general applicant data. I argue that both sets of data can answer the same question: the difference is that with the "general applicant" data, the z-score "threshold" for competitiveness would not be 0, but whatever combination of GPA/VR/CASper tends to be successful based on the amalgamated results. With this model, we instead apply the means of successful applicants, which sets a known z-score threshold of "0" for being competitive, and we arrive at similar answers.

 

 

 

 

I agree with your first three paragraphs, but my objections never rested on them. Your model would be valid, if imprecise, if those were the only issues.

 

Your fourth paragraph, however, is where it falls apart. With a large enough sample size, you can infer means and standard deviation of the general population, if the sample is representative. Accepted applicants are clearly not a representative sample of the overall applicant pool. They weren't chosen at random. Heck, the entire use of Z-scores by McMaster is designed to separate out the high-performing applicants.

 

This gets back to a major problem with your method: the Z-scores McMaster used to select invited applicants, and the means and standard deviations used in their calculation, had to exist before the invited applicants were selected. McMaster had to use different Z-scores than the ones you've calculated, because the data used to calculate yours didn't exist when McMaster selected their invitees. The only thing they have in common is that they're both Z-scores.

 

Assumptions are fine if they're valid, but you've made at least one assumption that is evidently false: that the admitted applicant pool is representative of the total applicant pool. Since McMaster only had the total applicant pool when creating their pre-interview Z-scores, and the data you used is not representative of that total applicant pool, the Z-scores you calculated cannot be said to be representative of anything McMaster used.

 

 

Well, it could be suggested that yours was accurate out of coincidence. You didn't really have much backing your numbers up; you yourself called it naive.

 

You misunderstand what I mean when I say "naive" - I meant unassuming, not without evidence backing it up. A linear regression model is a naive analysis of data - it makes few assumptions about the underlying process producing that data - but it's still a very useful method of analysis.

