Premed 101 Forums

Supreme Court ruling may pave way to identification of Ontario’s top-billing physicians


9 minutes ago, rmorelan said:

Realistically possible? Ha - probably not. But you would have, at the very least, a disconnect on some level between the various goals of the people involved (rads vs. admin). Plus we now have pathology as an example of how things can go sideways, and we hope not to repeat it. One of the reasons rads has made so much is the explosion in the need for imaging, which is frankly driving us a bit nuts with volume. There is just this never-ending pool of stuff to read - never a "light day" as it were, never any down time. The scanners are run to max capacity constantly, with each study getting more and more complex - more reconstructions, more slices, new techniques ha. Messy.

What are your thoughts on the future of rads in, say, 10-20 years? Do you think there will still be such demand in that timespan? I hear a lot of doom-and-gloom talk about AI as it pertains specifically to radiology.

 

Rads is something I'm interested in, so sorry if this is off topic - you can reply by PM if you want.

On 4/12/2019 at 4:30 PM, ArchEnemy said:

I have heard that it is a group of several ophthalmologists (~10) billing under one physician.

But why would they be doing that? 

1 hour ago, tavenan said:

What are your thoughts on the future of rads in, say, 10-20 years? Do you think there will still be such demand in that timespan? I hear a lot of doom-and-gloom talk about AI as it pertains specifically to radiology.

The demand for the diagnostic information that imaging provides is ever-increasing. I can't see that humans in radiology are more susceptible to replacement than humans in any other field of medicine (for which there are also AI applications for diagnosis). If there is an error in a machine read, who will take liability? Experiments have shown that AI systems can be hacked, leading to wrong test results. Instead, AI can be better applied throughout all areas of healthcare to help make us more efficient and deal with the increased demand - for example, in radiology, this might include triaging urgent exams and helping pull clinical information.
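
To make the triage idea concrete, here is a minimal sketch in Python, assuming a hypothetical model that assigns each exam an urgency score (all names and fields below are made up for illustration - a radiologist still reads everything, the model only reorders the queue):

# Sketch of model-assisted worklist triage (hypothetical model and fields).
from dataclasses import dataclass

@dataclass
class Exam:
    accession: str
    modality: str
    urgency_score: float  # 0.0-1.0, from a hypothetical triage model

def triage(worklist: list[Exam]) -> list[Exam]:
    # Highest-urgency exams move to the front of the reading queue.
    return sorted(worklist, key=lambda e: e.urgency_score, reverse=True)

worklist = [
    Exam("A100", "CXR", 0.12),
    Exam("A101", "CT head", 0.91),  # e.g. model suspects a bleed
    Exam("A102", "CXR", 0.47),
]
for exam in triage(worklist):
    print(exam.accession, exam.modality, exam.urgency_score)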

7 hours ago, rmorelan said:

Ha, well, that is predictable.

There is also the question of what you mean by appropriate - right now some people on FFS, I think reasonably, feel they are moving at extreme volume. That isn't about the money; it is about the pressure from long wait lists and clinical demand. Radiology would actually fall into that category - our output per day has, at least by informal analysis, doubled in the past 15 years, and that isn't from tech advances. Go to salary and you'd better believe people will slow down - in their eyes, back to a sane level that is safer for patients.

I don't expect any government body to see a reduction as positive, mind you, ha.

The ones who got flagged were also seeing lower volume than the other salaried staff - things like four follow-ups a day at 30 minutes per follow-up. Just clear abuse of the salary system.

10 hours ago, Lactic Folly said:

The demand for the diagnostic information that imaging provides is ever-increasing. I can't see that humans in radiology are more susceptible to replacement than humans in any other field of medicine (for which there are also AI applications for diagnosis). If there is an error in a machine read, who will take liability? Experiments have shown that AI systems can be hacked, leading to wrong test results. Instead, AI can be better applied throughout all areas of healthcare to help make us more efficient and deal with the increased demand - for example, in radiology, this might include triaging urgent exams and helping pull clinical information.

I find the liability issue interesting. I think there should be a way to build the anticipated cost of litigation into the business model and then adjust your product premium accordingly. You can probably even set out liquidated-damages terms in your contract when you sign with hospitals. If the current model has physicians or hospitals covering malpractice insurance, a larger company should be able to figure out a cost-effective way to insure itself.

I think if there is an AI product that can perform at an acceptable level, then the business case should be relatively easy compared to the R&D.
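
As a back-of-the-envelope sketch of how that pricing might work (every number below is a hypothetical placeholder, not real actuarial data):

# Toy example: fold expected litigation cost into the per-study price.
# All figures are hypothetical placeholders.
error_rate = 1e-5           # assumed probability a read leads to a claim
avg_claim_cost = 2_000_000  # assumed average payout + legal costs ($)
base_price = 3.00           # price per study before risk loading ($)
margin = 1.25               # safety factor on the risk load

expected_loss_per_study = error_rate * avg_claim_cost  # $20.00
risk_loaded_price = base_price + margin * expected_loss_per_study

print(f"risk load per study: ${expected_loss_per_study:.2f}")
print(f"price per study:     ${risk_loaded_price:.2f}")  # $28.00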

20 hours ago, blah1234 said:

I find the liability issue interesting. I think there should be a way to build the anticipated cost of litigation into the business model and then adjust your product premium accordingly.

Off-topic: Sure, of course this can be done from the perspective of the manufacturer. I was considering it more from the perspective of the party making the decision to use AI in patient care, i.e. hospitals. Would they be comfortable employing AI as a diagnostician in its own right (in effect making the hospital responsible for the decision to employ AI if a worst-case adverse event occurs), or would AI be considered in the realm of equipment (still requiring human input to accept or overrule what the AI is doing, thus keeping the liability with the physician)? There doesn't seem to be a consensus based on a quick survey of the articles out there - the legal landscape is still evolving.

12 hours ago, Lactic Folly said:

Off-topic: Sure, of course this can be done from the perspective of the manufacturer. I was considering it more from the perspective of the party making the decision to use AI in patient care, i.e. hospitals. Would they be comfortable employing AI as a diagnostician in its own right (in effect making the hospital responsible for the decision to employ AI if a worst-case adverse event occurs), or would AI be considered in the realm of equipment (still requiring human input to accept or overrule what the AI is doing, thus keeping the liability with the physician)? There doesn't seem to be a consensus based on a quick survey of the articles out there - the legal landscape is still evolving.

Most of the current commercial AI products dodge that completely by saying the product is just an augmentation tool. Suing the manufacturer in that case would be like suing the maker of your PACS because you missed a lung nodule despite having tools to help with that (namely, the MIP images in lung windows, which help radiologists pick up nodules - rough sketch below).

That puts all the liability back on the radiologist, same as it is now. 

That is also how the tools are registered with the FDA, for instance. It makes the barrier to entering the market a lot lower and promotes innovation. Ha, I think it will be a while before we take it to the next level.
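
For anyone curious about the MIP mentioned above: conceptually it is just a maximum taken over a slab of slices. A rough NumPy sketch, assuming the CT volume is already loaded as a 3D array (slices x rows x cols):

# Sliding-slab maximum intensity projection (MIP) along the slice axis.
import numpy as np

def slab_mip(volume: np.ndarray, slab: int = 10) -> np.ndarray:
    # For each slice, take the max over a window of `slab` slices around it.
    # Small bright structures (like lung nodules) that are faint on any one
    # thin slice stand out on the projected image.
    n = volume.shape[0]
    out = np.empty_like(volume)
    for i in range(n):
        lo, hi = max(0, i - slab // 2), min(n, i + slab // 2 + 1)
        out[i] = volume[lo:hi].max(axis=0)
    return out

volume = np.random.rand(40, 64, 64)  # stand-in for real CT data
print(slab_mip(volume).shape)        # (40, 64, 64)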

Interesting discussion.

Hmm, I wonder if there are any examples or lessons from automatic ECG interpretation that one could extend to radiology. Suppose a psychiatrist orders an inpatient ECG to assess the QT interval before an antipsychotic change, and the interpretation says normal QT with non-specific ST changes but misses an obscure MI or arrhythmia. Who's at fault? Can you sue the ECG machine manufacturer?

I think it's institution-specific whether and when there's a formal over-read of the ECG, but there seems to be a lot of trust in the automatic interpretations among non-cardiology people. Of course, some squiggly lines are easier for machines to interpret than a million-slice CT, but who knows - maybe one day the technology will be good enough to earn our trust (at least for simple "triaging").
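
For context, the QT the machine reports is usually rate-corrected; Bazett's formula, for example, divides the measured QT by the square root of the RR interval. A toy version (cutoffs of roughly 450 ms for men and 470 ms for women are common, but exact thresholds vary by source):

# Toy QTc calculation using Bazett's formula: QTc = QT / sqrt(RR),
# with QT in milliseconds and RR in seconds.
import math

def qtc_bazett(qt_ms: float, heart_rate_bpm: float) -> float:
    rr_sec = 60.0 / heart_rate_bpm
    return qt_ms / math.sqrt(rr_sec)

def flag_prolonged(qtc_ms: float, sex: str) -> bool:
    # Common cutoffs; exact thresholds vary by guideline.
    cutoff = 450.0 if sex == "male" else 470.0
    return qtc_ms > cutoff

qtc = qtc_bazett(qt_ms=400, heart_rate_bpm=90)  # faster HR inflates QTc
print(f"QTc = {qtc:.0f} ms, prolonged: {flag_prolonged(qtc, 'male')}")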

10 hours ago, RichardHammond said:

Interesting discussion.

Hmm, I wonder if there are any examples or lessons from automatic ECG interpretation that one could extend to radiology. Suppose a psychiatrist orders an inpatient ECG to assess the QT interval before an antipsychotic change, and the interpretation says normal QT with non-specific ST changes but misses an obscure MI or arrhythmia. Who's at fault? Can you sue the ECG machine manufacturer?

I think it's institution-specific whether and when there's a formal over-read of the ECG, but there seems to be a lot of trust in the automatic interpretations among non-cardiology people. Of course, some squiggly lines are easier for machines to interpret than a million-slice CT, but who knows - maybe one day the technology will be good enough to earn our trust (at least for simple "triaging").

Machines miss stuff on ECGs all the time, and clinical context is another important variable.
