Premed 101 Forums

Quality of Submissions


Guest macdaddyeh


Guest ploughboy


 

Hi all,

 

Thanks for the insight into what's happening with our essays and the whole Mac process. Random babbling follows...

 

In one of the threads a while back, the possibility of splitting up the batches was discussed. The idea was to have each submission evaluated by three people, but each evaluator would read a different group of 30 essays. The thread was prompted by comments about the apparent subjectivity of Mac's evaluation (somebody posted something to the effect of "my essays were 80th percentile last year and only 20th percentile this year, therefore you guys suck!"). The discussion happened long before my time, but when I read it in the archives it got me wondering about how to measure the effect of assessor subjectivity on essay marks.

 

Carolyn and gucio93: You seemed to be involved in evaluation and collation up to your elbows. Do you know if Adcom has ever considered the idea of seeding the essay pool with some "control" submissions? Here's what I mean: take some essays that were rated 7/7 when they were first submitted, throw in some 1/7s and some from the middle. I'm not a stats guy, but off the top of my head I'd expect that "some" would be a relatively small number, probably less than 100. Reintroduce these same essays into the essay pool year after year and track how they are rated.

 

There might be copyright issues with this (I'm not a lawyer either, but I suppose the original applicants hold copyright on their essays and you'd have to secure their permission to use their work). You'd have to check the essays for anachronisms ("I'm looking forward to performing at this year's Superbowl with my idol, rap superstar Vanilla Ice"), and if the 15 questions changed you'd be euchred. Other than that I can't think of too many technical reasons why this couldn't be done.

 

Whether it's worth doing is another question entirely. My gut feeling is that the 7's would consistently be highly rated, the 1's would always be low and the stuff in the middle would bounce around a bit. In fact, there are probably already a bunch of similar experiments in the psych literature, not necessarily involving med school applications.
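
If I were actually pitching this, the bookkeeping would be pretty simple. Something like the sketch below, in Python, is all it would take (the essay IDs and marks are completely made up, just to show what "track how they are rated" might look like in practice):

# Hypothetical sketch: follow "control" essays seeded back into the pool each
# year and see how much their marks drift. All IDs and marks below are invented.
from statistics import mean, stdev

# marks (1-7) the same control essay received in each year it was re-circulated
control_scores = {
    "control_7_a": [7, 7, 6, 7],   # originally rated 7/7
    "control_4_a": [4, 5, 3, 5],   # originally rated mid-range
    "control_1_a": [1, 1, 2, 1],   # originally rated 1/7
}

for essay_id, scores in control_scores.items():
    drift = max(scores) - min(scores)
    print(f"{essay_id}: mean={mean(scores):.1f}, sd={stdev(scores):.2f}, drift={drift}")

If the 7s and 1s stay put and only the middle bounces around, that would back up my gut feeling above; if everything bounces, that would tell you something too.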

 

Hmm...stats and psychology. Mayflower1, are you reading this?

 

Anybody have any comments on this? It's not like I'm going to tell Mac how to run their admissions, just an idea I thought I'd spit out.

 

pb

 

 

PS No, I didn't write such run-on sentences in my Mac essays...

 



Guest Kirsteen

Hi there ploughboy,

 

That's an interesting idea, and if I were running an admissions process I'd certainly consider something like it to gauge just how well my process works. Easier said than achieved in times of restricted university budgets and personnel shortages, however!

 

McMaster does seem to be trying to control its essay assessment process in the simple, known ways, e.g., comparing scores among assessors (a primitive kappa analysis). One major issue with your "seeding" suggestion is what McMaster would do if they discovered that the essay scoring was completely inconsistent. Do they stop the study in progress and completely revise their approach? If so, that could translate into a large amount of irretrievable time and effort.
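
For anyone curious what such an agreement check involves, a bare-bones Cohen's kappa for two assessors looks roughly like the following (the marks are invented purely for illustration, and plain Cohen's kappa is only a stand-in here; I have no idea what McMaster actually computes):

# Bare-bones Cohen's kappa for two assessors scoring the same essays 1-7.
# The marks below are invented; this is not McMaster's actual procedure.
from collections import Counter

assessor_a = [7, 5, 4, 6, 2, 5, 7, 3, 4, 5]
assessor_b = [6, 5, 3, 6, 2, 4, 7, 3, 5, 5]

n = len(assessor_a)
observed = sum(a == b for a, b in zip(assessor_a, assessor_b)) / n

# chance agreement, from each assessor's marginal frequency of each mark
freq_a, freq_b = Counter(assessor_a), Counter(assessor_b)
expected = sum(freq_a[s] * freq_b[s] for s in range(1, 8)) / (n * n)

kappa = (observed - expected) / (1 - expected)
print(f"observed={observed:.2f}, expected={expected:.2f}, kappa={kappa:.2f}")

A weighted kappa that gives partial credit for near-misses (a 6 against a 7, say) would arguably suit a 1-7 scale better, but the idea is the same: how much better than chance do the assessors agree?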

 

At the moment, all things considered, McMaster may be doing a decent job of the essay evaluations given the circumstances within which they must work, i.e., a relatively tiny time frame and thousands of applications to review. They have groups of three individuals scoring the essays, and if the scores deviate widely, an additional assessment is performed. Assessors do not seem to be instructed to score their batch of 30 essays relative to one another, i.e., no prescribed distribution needs to be adhered to; therefore a set of thirty truly outstanding essays will potentially remain outstanding relative to any other essay within any other batch of thirty evaluated by any other assessor.

One of the main problems with the McMaster system--which is inherent in most medical school selection systems--is the subjectivity involved in marking these essays. (Shocker!) This cannot easily be avoided unless, perhaps, the school applies a list of tangibles to be identified within each essay (e.g., does the essay mention volunteer work; has the person traveled to other countries, etc.) and provides marks for each. However, in that case there seems to be little point in requesting an essay, which provides potential for creativity, originality, humour--factors that tend not to be as highly stressed in the request for a point-form list.

 

I agree with you though: with respect to subjectivity, the terrible essays will most likely remain terrible, the stars will continue to shine, and it's the wee buggers in the middle that will tumble around the washing machine and potentially end up all over the place, score-wise. All in all, if you'd like your essay to land on top of the score heap at Mac, you've somehow got to enter, in writing, with a product akin to the Gisele Bundchens of the essay world: highly alluring to most and unique among thousands of others.

 

Cheers

Kirsteen


Guest MDWannabe

Hey Kirsteen and pb:

 

I don't know about Gisele, but I think Kirsteen hit the nail on the head! I also think pb's suggestion re having a control group is an interesting thought. I do agree that the 7's will almost always be 7's, and the 1's will probably always be 1's. For those who claim to have hit the 80th percentile on the first go-round and the 50th on the next, there are some plausible explanations beyond potential goof-ups. Two examples:

 

1. A really "out there" set of essays that had unique appeal for some but not for others (and, by the more extreme nature of the product, was subject to more variable reviews).

 

2. Providing the exact same answers from one year to the next, without any new experiences over the ensuing year, can severely depreciate the overall feel of the application and potentially leave the reader with a sense of stagnation.

 

In any case, I'm sure you know that Mac is always reviewing and tinkering with the application process. Last year, some of us finished our interviews (the ones that counted) and continued on through a second set of interviews as part of a study to test out a different interviewing technique called the MMI (multiple mini interview). The format was more in keeping with the evaluation processes we apparently experience at the end of first year. For us, it was a series of 8-minute fact situations or interviews: we would read the situation at the door, have 2 minutes to think about it, and walk in to deal with the situation or question posed. After 8 minutes, we went out and on to the next door to begin the process again. Sounds a bit tense, but it was actually a lot of fun. Check out the Grand Rounds section of the Ontario Medical Education Network (OMEN) website (sorry, I don't have the URL) from last June if you're interested; it will give you a sense of the thinking that goes into this process.


Guest gucio93

Thanks everyone for the vote of confidence!

I think all the questions posed to me have been answered thus far (as I was sleeping off the night shift ... and then sleeping some more to try and fight this lovely cold I have acquired in the ER ;) ). If there are any more questions, feel free to ask; I usually check the board once a day.


Guest macdaddyeh

Hey gucio93 (and/or any other helpful, currently enrolled Mac students), I have some more questions:

 

1. How long does each assessor have to examine each candidate's submission?

 

2. I would like to know how the timing works. That is to say, if you are looking at submissions now and we don't receive a response until March as to whether or not we are granted an interview, what happens in between in terms of TIMING? (You've already illuminated very well WHAT happens; I would simply like to know, if you can tell us, WHEN these procedures happen.)

 

3. I don't know what year you are in, but are fellow assessors really feeling the increased administrative workload this year with so many more new applicants?

 

4. Are 7's given out a lot as a submission mark? That is to say, I know there is a lot of freedom and discretion, but is a lot of care taken not to "inflate" grades? In other words, in training do you learn NOT to give out 7s except for truly exceptional candidates?

 

Pardon the inundation of questions; maybe others had these same questions. Now that the semester has settled down (mind you, I'm still in finals), my mind is on med school again (and likely will continue to be). You've been very helpful. Thanks for your time.


Guest gucio93

1. Each assessor gets approximately 30 submissions. Admissions would ideally like them back before Christmas, but some assessors take the Christmas break to work on their evaluations. The absolute deadline is Jan. 2-3 (whenever the University opens after the holiday break).

 

2. There are 3,800 (or so) applicants. Each team, made up of three assessors, has 30 applications to evaluate. That makes approximately 125 teams of three assessors each, or roughly 375 individual assessors who can return evaluations between now and Jan. 2-3. As you can imagine, there will be inherent variability in the timing of returned evaluations. Assuming that Jan. 2 is the deadline, the worst-case scenario is that all 3,800 evaluation scores and comments will have to be entered into the computer after that date (of course that likely does not happen, and there are some good souls who return their evaluations earlier ;) ).

However, as you may imagine, it takes a bit of time for the few people in admissions to go through all the data and enter it into the computer, at which point the majority of the top 400 are chosen for the interview. I say the majority, because there are exceptional circumstances in which a person may be chosen on the basis of life experiences only, even though their GPA may place them lower down in the applicant pool. These exceptional (and I stress exceptional) cases are reviewed again by the people in charge of admissions and, based on the comments made by the assessment team, may also be invited to the interview. The interview letters are then prepared and mailed. Very likely the couple of months (between Jan and March) that seem like an eternity to medical school applicants just fly by as far as the admissions staff are concerned.

 

Once the interviews are completed, the 400 applicants, each assessed by 3 interviewers, generate 1,200 evaluations, which once again have to be entered into the computer by the admissions staff. Then each member of the collation committee is given 10 files and is responsible for perusing them and presenting each of those candidates to the committee. Over the period (I believe it is 1 week) that the collation committee meets, the files are presented by the individual responsible for them, discussed by the committee members (while they peruse the files' various contents), and finally votes are cast to place each candidate into the accepted, rejected, or maybe group. In order to gain outright acceptance, virtually everyone on the committee must feel that the applicant fits Mac perfectly and is an outstanding (there's that word again) individual, so only a moderate number of individuals are accepted outright. Once the collation process is completed, the decisions are solidified and verified, and letters are prepared and sent out in late May. As you can see, it is quite a labour-intensive process.
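
If you like to see the workload as numbers, it adds up roughly as follows (the size of the collation committee is my own rough inference from 400 files at 10 per person, not an official figure):

# Back-of-the-envelope tally of the workload figures mentioned above.
# The collation committee size is inferred (400 files / 10 per member), not stated.
applicants = 3800
apps_per_team = 30
assessors_per_team = 3

teams = applicants / apps_per_team                   # roughly 125-127 teams
assessors = teams * assessors_per_team               # roughly 375-380 assessors

interviewees = 400
interviewers_each = 3
interview_evals = interviewees * interviewers_each   # 1,200 evaluations to enter

files_per_collator = 10
collators = interviewees / files_per_collator        # about 40 committee members (inferred)

print(f"teams ~{teams:.0f}, assessors ~{assessors:.0f}, "
      f"interview evaluations {interview_evals}, collation members ~{collators:.0f}")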

 

3. I'm in year 2. I don't think it's so much that fellow assessors feel the pressure (I had 30 applications last year as well) as that there must be substantially more teams of assessors due to the increased number of applicants. It is precisely because going through each submission thoroughly, and several times, takes such a long time that it is difficult to find individuals to volunteer time from their busy schedules to do it. I believe I spent a good 2-3 hours in total on each application, which adds up to a good 80-90 hrs, which is around 3.5 days - that is an incredible amount of time!

 

4. Like Carolyn mentioned, it is stressed at the interview that a "normal distribution" is not necessary in the group of 30. So, if an assessor feels that there are 15 outstanding candidates, they would not be faulted for giving 15 "7s" (provided that, when the evaluations were looked over, the three assessors' scores were reasonably similar). However, from talking to colleagues it seems like many of us had similar experiences, in that usually 1-2 applications were absolutely outstanding, a couple were atrocious, and many were somewhere in between. For whatever reason, in many cases it seemed to work out that way, but I certainly did not talk to everyone in my class, or in the assessor pool for that matter, so this is a very one-sided opinion.


Hi macdaddyeh,

 

I applied to Mac, along with Toronto, Queen's & Western... I'm not sure what my first choice would be; mostly I want to get in anywhere :)

 

You?

 

peachy


Guest jmh2005

Hi all!

 

I, too, am a first-year student at Mac, and I totally agree with my colleagues: you need to focus on what you are doing now without thinking too much about what is going on with your application. Between now and January, yes, your application will be read (a task we all take very seriously) by a student, a community member and a faculty member, and the computers take care of the rest.

 

Although I cannot comment on what I have read, I just wanted to assure everyone out there that this is not just a random process, and the time we all put into reading these is HUGE! I spent about an hour on each, totalling about 30 hours (which isn't easy when you are trying to learn renal physiology yet again!). I do think that 1st-year students now have an idea of what it takes to be successful in Medicine, and in Medicine specifically at McMaster, even if we've only been here for 3.5 months!

 

So, good luck on exams to those still in school, and have a great holiday season. March 1st will be here before you know it, so don't stress (I know that's easier said than done)!

 

J


Guest ploughboy


 

Hi Kirsteen,

 

Your comment about Gisele inspired me to go over to the website of an ongoing experiment in subjective evaluation (cough...hot-or-not...cough). Unfortunately they only publish average evaluations, without showing the underlying distribution of scores. Ahh, the things I do for science...

 

I'm not sure what you mean by "Do they stop the study in progress and completely revise their approach? If so, that could translate into a large amount of irretrievable time and effort." Does that translate into "Hear no evil, see no evil"? I'm not making fun; you have a valid point. If at the end of the day Mac's administration can point to some metrics where Mac grads are at or above national averages (and I'm sure they can), they can turn around and claim that their selection process works.

 

Even though AdCom is probably a little freaked by the number of applicants, a deep (high-quality) applicant pool works to their advantage. Although Mac would like to interview the 400 *best* applicants, the pool is deep enough that even if they were throwing darts to select interviewees they might get 400 sufficiently good applicants. (Pardon the mixed metaphors.)

 

On the whole I agree that Mac is doing the best they can with the resources they have available. I'm mildly curious about how much variability there is in the essay scores, but I'm glad Mac spends some extra effort up-front, and that they seriously attempt to select interview candidates based on personal characteristics and "fit" with the program, as well as GPA.

 

Happy weekend!

 

pb

 

 



Guest ploughboy


 

Hi gucio93,

 

Thanks for all the information. Mac med students' participation in this forum is a good reminder that there are real people behind the Mac eval process; it's not just a giant premed-eating machine.

 

By the way, I don't know if the humour was intentional or not, but one of your posts made me laugh. Only a med student would say (paraphrased): "Evaluating essays was 80 hours of work, so it took me almost four days to finish." That's the one thing that really scares me about meds - sleep.

 

Seriously, it's great to know our essays are being read thoroughly, and aren't just gone over once lightly.

 

Two final questions, of the sort that I should ask next July when I'm re-writing my application (smile). Since you're here and answering questions now, I'll ask now. Thanks again, this is great!

 

1) Intra-application variation: I imagine that an applicant who can *really* write will shine on all the questions, but there's probably some variability within most other applications. Do you find that most of the "really good but not totally awesome" applications are of uniform quality throughout, or are they carried by five or six of their answers?

 

2) Related query: are there certain questions that a lot of people often trip over? Your comment about evaluating applicants' ability to follow directions makes me ask this. There was also an old comment from a Mac evaluator giving a list of common problems with applications. Right after "spelling and grammar" came "not answering the question". I can think of a couple of questions where this might happen, accidentally or deliberately.

 

TIA,

 

pb

 

PS Congratulations on 200 posts!

 

 



Guest K2Optimist

When I tell people about the Mac admission process, I am ALWAYS asked about this mysterious "community member".

 

Could someone please define "community member" for me? Does "community" refer to Mac or Hamilton?


Guest gucio93

Glad it made you laugh ;) .

 

1) Many applications will have a couple of answers that are not as strong as the rest, so when I mark I always try to look at the big picture. That's why I read them more than once: to get a sense of the person behind the application.

 

2) I haven't found that there are specific questions that people have difficulty with; different individuals struggle with different questions. However, I would agree with your comment about "not answering the question" being one of the major setbacks for some. Certain questions have two parts to them, and certain ones are phrased in such a way that you can write a great general paragraph, but if you haven't addressed the specific point the question is asking, you have not really done your job of following directions... I hope that makes sense.

 

;) Apparently you're paying more attention than I am; I didn't realize I'd made 200 posts, but thanks for the congrats.

