Premed 101 Forums
benzo

Future of AI in Anesthesiology

Recommended Posts

Hey Everyone, 

 

I'm not currently in medicine, but I'm hoping to land some interviews and get in this cycle (or within the next few years).

I've always had an interest in anesthesiology and was wondering if anyone currently in the field, or with some experience of anesthesia, could comment on the general feeling towards artificial intelligence (for better or worse)?

How much of your practice - say, giving drugs during surgery - could be replaced by AI?

We're living in an age where AI lawyers might be a real thing in the near future (http://www.bbc.com/news/technology-41829534), so I'm interested to know how it could affect anesthesiology (or surgery, medicine, etc.).

 


For anyone interested in AI in medicine, a decent place to start is this blog by a radiologist and AI researcher. It's a fairly unique take from someone spanning both sides of the debate. The overarching point I've taken from his arguments is that AI could easily impact the practice of medicine, likely significantly and very possibly in a way that disrupts the physician job market, but it's very unlikely to do so quickly or suddenly, and is extremely unlikely to completely eliminate the need for any medical specialty at any point in the foreseeable future.

1 hour ago, Med Life Crisis said:

Canada's investing heavily into AI... Why not?

It already happened - the AI anesthesia machine. It didn't fly, partly because anesthesiologists warned against it... see ^^

10 hours ago, marrakech said:

It already happened - the AI anesthesia machine. It didn't fly, partly because anesthesiologists warned against it... see ^^

Hm. I think it will be a long time before people trust machines to do procedures. However, I think AI can and will soon be implemented for more "diagnostic" tasks - for example, Stanford's melanoma-detecting algorithm. It could easily be built into an iPhone app that reports the % risk of a lesion being malignant, and hospital appointment triaging could then be based on that. This way you're not replacing any doctors, so there should be less resistance from them.
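To make the triage idea concrete, here's a minimal sketch (in Python) of mapping a classifier's malignancy probability to an appointment category. The thresholds and category names are made up purely for illustration - any real cutoffs would have to come from clinical validation, not from me.

```python
# Hypothetical sketch: triaging referrals by a model's malignancy probability.
# The thresholds (0.5, 0.1) and categories are illustrative, not clinical.

def triage(p_malignant: float) -> str:
    """Map a classifier's malignancy probability to an appointment category."""
    if not 0.0 <= p_malignant <= 1.0:
        raise ValueError("probability must be in [0, 1]")
    if p_malignant >= 0.5:
        return "urgent referral"    # high suspicion: see a dermatologist soon
    if p_malignant >= 0.1:
        return "routine referral"   # moderate suspicion: standard queue
    return "monitor"                # low suspicion: self-monitor / GP follow-up

print(triage(0.72))  # urgent referral
print(triage(0.20))  # routine referral
print(triage(0.03))  # monitor
```

The point is that the model only outputs a number; the hospital still decides what each range of numbers means, which is where the "assistive, not replacing" framing comes in.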


It's hard to predict the distant future.  What you need to ask yourself is: is any of this likely to happen in the next 30-40 years?  Even if it gets invented, remember it still has to go through a shit ton of safety testing, get bought, become something patients will accept, etc.  Self-driving cars have existed for years, remember, and they have clearly not overtaken drivers.

I guess what I'm clumsily saying is that, at least for my specialty, I'm not at all worried about this happening in the 35 years or so until my potential retirement.  You probably shouldn't be either.  Looking at the above blog, maybe radiology has more to worry about, although they won't be "replaced", just streamlined.


 

19 minutes ago, goleafsgochris said:

It's hard to predict the distant future.  What you need to ask yourself is: is any of this likely to happen in the next 30-40 years?  Even if it gets invented, remember it still has to go through a shit ton of safety testing, get bought, become something patients will accept, etc.  Self-driving cars have existed for years, remember, and they have clearly not overtaken drivers.

I guess what I'm clumsily saying is that, at least for my specialty, I'm not at all worried about this happening in the 35 years or so until my potential retirement.  You probably shouldn't be either.  Looking at the above blog, maybe radiology has more to worry about, although they won't be "replaced", just streamlined.

There's a huge difference between a research paper and a proven clinically useful aid, let alone a diagnostic app.  I see technology as providing an assistive role rather than a replacement.  That's also the message from the 'failed' anesthesia machine - replacement will face resistance.  Consider something as "simple" as ECGs: automated ECG reading has been around for a while, but it's still far from a gold standard for interpretation.  I expect similar outcomes in other areas where the technology is developed.

2 hours ago, marrakech said:

Consider something as "simple" as ECGs: automated ECG reading has been around for a while, but it's still far from a gold standard for interpretation.  I expect similar outcomes in other areas where the technology is developed.

 

Q: "Who programs automatic ECG readers?"
A: "Malpractice lawyers"

(Not an original joke, I think I first heard it in a talk by Amal Mattu)

37 minutes ago, ploughboy said:

 

Q: "Who programs automatic ECG readers?"
A: "Malpractice lawyers"

(Not an original joke, I think I first heard it in a talk by Amal Mattu)

Hahaha - well said.  Broader ethico-legal implications are rarely considered.

2 hours ago, goleafsgochris said:

Looking at the above blog, maybe radiology has more to worry about, although they won't be "replaced", just streamlined.

Radiology has nothing to worry about. Our workflow will be streamlined, like you said, and throughput will increase without compromising diagnostic accuracy. The menial tasks of our job (which I estimate take up about 15% of our time) will be automated. Detection of certain findings like lung nodules or fractures will be automated. Speech recognition will become more powerful. Radiomics will obviate biopsies. AI will change what we do and improve patient care, but won't be replacing us. On the contrary, I think we will see a boom in radiology, similar to that which was seen with the advent of digital images.

4 hours ago, W0lfgang said:

Radiology has nothing to worry about. Our workflow will be streamlined, like you said, and throughput will increase without compromising diagnostic accuracy. The menial tasks of our job (which I estimate take up about 15% of our time) will be automated. Detection of certain findings like lung nodules or fractures will be automated. Speech recognition will become more powerful. Radiomics will obviate biopsies. AI will change what we do and improve patient care, but won't be replacing us. On the contrary, I think we will see a boom in radiology, similar to that which was seen with the advent of digital images.

A new golden age for radiology? Could be exciting times - maybe not for pathology, though.

On 2/4/2018 at 1:22 PM, marrakech said:

A new golden age for radiology? Could be exciting times - maybe not for pathology, though.

I would argue that AI will benefit all branches of medicine, and mankind in general. We should embrace AI so that we can understand and control it, not fear and avoid it.

On February 10, 2018 at 5:42 PM, W0lfgang said:

I would argue that AI will benefit all branches of medicine, and mankind in general. We should embrace AI so that we can understand and control it, not fear and avoid it.

And here's an article proposing just that (link)...
 

20 minutes ago, ploughboy said:

Kind of silly. Even if we agree that gut feeling and intuition are reliable for decision-making... they're still based on data. Maybe it goes beyond classic demographics and includes things like what the patient looks like, how they're behaving, etc. - but that's still data that an AI could churn through.

4 hours ago, PhD2MD said:

Kind of silly. Even if we agree that gut feeling and intuition are reliable for decision-making... they're still based on data. Maybe it goes beyond classic demographics and includes things like what the patient looks like, how they're behaving, etc. - but that's still data that an AI could churn through.

As an AI researcher (can I say that now? It still sounds very dramatic and pompous, ha) I would agree, but one issue in medicine is summed up in this wonderful quote, I think:

"if someone can say precisely how they are doing something, I can build a machine to do that"

Doctors, after a point, simply don't have complete insight into how they are doing what they are doing. That is where the gut-feeling thing kicks in - radiologists don't really know how they read images, pathologists don't really know how they read slides, and anesthesiologists don't really know exactly how they know someone is going off the rails. There are always things they can point to - and they can even make up a good story that fits MOST but not ALL of what they are doing. I see it in myself when I read a study - I know something is wrong before I even know what is wrong; it's just wrong! We are very good at fooling ourselves into thinking it is all logic, and data, and guidelines... but it isn't. Experience, for lack of a better word, is the human trait of going beyond all that.

And it is very hard to build a machine to do something when we don't know how we do it ourselves.  There is always this point where the highly skilled doctor basically says: well, I looked at it, and I knew they needed this. Wonderful... but HOW did you do that?

That is exactly why we have shifted to using learning systems - we gave up on people telling us how they do what they do, and started trying to build a machine that can learn how to do it by itself.

Edited by rmorelan
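For what it's worth, the "learning systems" point can be shown in miniature: instead of a person writing the rule, we hand the machine labelled examples and it fits its own rule. Below is a toy perceptron (plain Python, nothing medical, all values arbitrary) that learns logical AND purely from examples - no one ever tells it what AND means.

```python
# Toy illustration of a "learning system": nobody writes the rule for AND;
# the perceptron is shown labelled examples and fits the rule itself.

def train_perceptron(examples, epochs=20, lr=1):
    """Classic perceptron: nudge weights whenever a prediction is wrong."""
    w, b = [0, 0], 0
    for _ in range(epochs):
        for (x1, x2), target in examples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - pred          # 0 when correct, +/-1 when wrong
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # AND truth table
w, b = train_perceptron(examples)

def predict(x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

print([predict(x1, x2) for (x1, x2), _ in examples])  # [0, 0, 0, 1]
```

Obviously a real diagnostic model is vastly bigger, but the training loop is the same shape: examples in, rule out, with no one ever articulating the rule.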

11 minutes ago, rmorelan said:

As an AI researcher (can I say that now? It still sounds very dramatic and pompous, ha) I would agree, but one issue in medicine is summed up in this wonderful quote, I think:

Haven't read the rest of your post yet, but YES you can!! Congrats.

14 minutes ago, rmorelan said:

As an AI researcher (can I say that now? It still sounds very dramatic and pompous, ha) I would agree, but one issue in medicine is summed up in this wonderful quote, I think:

"if someone can say precisely how they are doing something, I can build a machine to do that"

Doctors, after a point, simply don't have complete insight into how they are doing what they are doing. That is where the gut-feeling thing kicks in - radiologists don't really know how they read images, pathologists don't really know how they read slides, and anesthesiologists don't really know exactly how they know someone is going off the rails. There are always things they can point to - and they can even make up a good story that fits MOST but not ALL of what they are doing. I see it in myself when I read a study - I know something is wrong before I even know what is wrong; it's just wrong! We are very good at fooling ourselves into thinking it is all logic, and data, and guidelines... but it isn't. Experience, for lack of a better word, is the human trait of going beyond all that.

And it is very hard to build a machine to do something when we don't know how we do it ourselves.  There is always this point where the highly skilled doctor basically says: well, I looked at it, and I knew they needed this. Wonderful... but HOW did you do that?

That is exactly why we have shifted to using learning systems - we gave up on people telling us how they do what they do, and started trying to build a machine that can learn how to do it by itself.

Now that I've read the rest of that...I'll respond like an AI pleb, so take it easy on me...

Lots of the AI headlines we see these days aren't about super complex human-designed algorithms, but about unguided machine learning that comes up with a strange way of accomplishing a task, a way that works but doesn't make sense to us. "Gut-feeling", I think, is the human version of that. We've got our own (not completely understandable/logical) algorithm based on the data we take in, but we do a better job at dressing it up with a story.

I'm thinking as I write, and now that I've written it, I realize it's actually a little strange. Unless we accept that we're basically doing what an AI is capable of (taking empirical data that is available to an AI and processing it through an algorithm that an AI could, in theory, replicate), we're claiming that we've got supernatural powers.

42 minutes ago, PhD2MD said:

Now that I've read the rest of that...I'll respond like an AI pleb, so take it easy on me...

Lots of the AI headlines we see these days aren't about super complex human-designed algorithms, but about unguided machine learning that comes up with a strange way of accomplishing a task, a way that works but doesn't make sense to us. "Gut-feeling", I think, is the human version of that. We've got our own (not completely understandable/logical) algorithm based on the data we take in, but we do a better job at dressing it up with a story.

I'm thinking as I write, and now that I've written it, I realize it's actually a little strange. Unless we accept that we're basically doing what an AI is capable of (taking empirical data that is available to an AI and processing it through an algorithm that an AI could, in theory, replicate), we're claiming that we've got supernatural powers.

and we don't - we are just biological computers running with neurons using the laws of chemistry and physics, and completely duplicatable in theory. 

that is what is interesting about AI research - we aren't 100% sure we can cure cancer but we are very hopeful we can. We aren't 100% sure we can travel in space long distances to other worlds and solar systems, but we are hopeful we can. We KNOW that an AI can be built - we just aren't yet sure how. 


The strongest AIs - the ones that outperform humans essentially 100% of the time - are those built without human intervention. You set the rules, and through a self-play reinforcement-learning algorithm the program learns what works and what doesn't by trial and error. This is easy to apply to games, for example: you set the rules (chess, go, checkers, whatever) and let the program self-play for 50 or 100 million games. You then get the best decision-maker one could ever imagine for that specific problem. The issue with medicine is that we unfortunately don't have the luxury of letting the algorithm discover the outcomes of its decisions (i.e., letting it administer any treatment to a patient).

Now, of course you can build AI in a different manner than through strict self-play reinforcement learning, but then you need human intervention and human data, which makes the algorithm much less robust, since the exploration will inherently be limited to the space already explored by humans. Just my 2 cents.

Note: I'm not a proper AI researcher.
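For anyone curious what "set the rules and let it learn by trial and error" looks like in code, here's a toy sketch: tabular Q-learning on a five-state corridor "game". It's a single-agent stand-in rather than true two-player self-play, and every number in it is arbitrary, but the idea is the same - the program is told only the rules and the reward, and discovers the winning policy on its own.

```python
import random

# Toy sketch of "set the rules, let it learn by trial and error":
# tabular Q-learning on a tiny corridor game (states 0..4, goal = state 4).
# All sizes and hyperparameters are arbitrary; nothing here is clinical.

random.seed(0)
N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]                       # step left or right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

alpha, gamma, eps = 0.5, 0.9, 0.3
for _ in range(500):                     # play many episodes against the rules
    s = 0
    while s != GOAL:
        # epsilon-greedy: mostly exploit current knowledge, sometimes explore
        a = random.choice(ACTIONS) if random.random() < eps else \
            max(ACTIONS, key=lambda a: Q[(s, a)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == GOAL else 0.0   # reward only for winning
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2

# The learned greedy policy walks straight to the goal from every state.
policy = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(GOAL)]
print(policy)  # [1, 1, 1, 1]
```

Scale the state space up from five corridor cells to a chess board and the episode count up to tens of millions and you get the self-play setup described above; the part medicine can't copy is that here the "patient" (the environment) can be reset and experimented on millions of times for free.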

On July 28, 2018 at 7:10 PM, rmorelan said:

and we don't - we are just biological computers running with neurons using the laws of chemistry and physics, and completely duplicatable in theory. 

that is what is interesting about AI research - we aren't 100% sure we can cure cancer but we are very hopeful we can. We aren't 100% sure we can travel in space long distances to other worlds and solar systems, but we are hopeful we can. We KNOW that an AI can be built - we just aren't yet sure how. 

A loss for large-scale AI: IBM's highly visible Watson doesn't seem to have worked that well in oncology, even after billions in investment.  Oncology was an ambitious target, though, given the complexity of dealing with different data sources (in different formats and not necessarily consistent) and with evolving science and treatments.

From an AI perspective, it looks like human intervention was involved in the data acquisition stage - Watson wasn't operating completely autonomously. 

To be fair, it looks like it has helped clinicians keep up with clinical knowledge, but that's clearly well short of the initial aims of the project.

"More than a dozen IBM partners and clients have halted or shrunk Watson’s oncology-related projects. Watson cancer applications have had limited impact on patients, according to dozens of interviews with medical centers, companies and doctors who have used it, as well as documents reviewed by The Wall Street Journal.

In many cases, the tools didn’t add much value. In some cases, Watson wasn’t accurate. Watson can be tripped up by a lack of data in rare or recurring cancers, and treatments are evolving faster than Watson’s human trainers can update the system. Dr. Chase of Columbia said he withdrew as an adviser after he grew disappointed in IBM’s direction for marketing the technology.

No published research shows Watson improving patient outcomes. 

Artificial intelligence has the potential to reinvent the world, from how businesses operate to the types of jobs people hold to the way wars are fought. In health care, AI promises to help doctors diagnose and treat diseases as well as help people track their own wellness and monitor chronic conditions. Watson's struggles suggest that revolution remains some way off.

IBM said Watson has important cancer-care benefits, like helping doctors keep up with medical knowledge. 'This is making a difference,' said John Kelly, IBM senior vice president. 'The data says and is validating that we’re on the right track.' "

https://www.wsj.com/articles/ibm-bet-billions-that-watson-could-improve-cancer-treatment-it-hasnt-worked-1533961147
https://twitter.com/AndrewYNg/status/1028750770668068865 (go through thread to read without subscription)

Edit: I saw a talk once by Andrew Ng (twitter link) - he demonstrated his autonomous helicopter flying... really impressive.

 

