NANS Webinar 2024 - Harnessing AI in Neuromodulation - Transforming Patient Outcomes
Video Transcription
Excellent. Good afternoon, everyone. Welcome to another session of the NANS Education Committee webinar series. With this year's series, we have tried to come up with topics related to novel technologies and the frontiers of neuromodulation. Today we have a very novel technology and a hot topic in both the medical and non-medical arenas: harnessing AI in neuromodulation. I'm Dr. Yashar Eshraghi. I work at the Ochsner Clinic in New Orleans, Louisiana, where I'm the program director for the Pain Fellowship. Briefly, about artificial intelligence: the exciting point is that the intersection of AI and neuromodulation can be very promising and could even reshape the landscape of pain management and beyond. There's a lot of potential for AI, and we see it in many different aspects of the academic world and medicine. At the same time, alongside this exciting potential, there are significant technical and ethical challenges that we need to be cautious about and understand deeply. To explore this complex issue, we are privileged to have a distinguished panel with us today. We have wonderful speakers, starting with Dr. Ryan D'Souza, a well-known face in the neuromodulation and pain societies. He is an associate professor and the director of neuromodulation at the Mayo Clinic, and he will address the challenges and ethical considerations of integrating AI into neuromodulation. This will set the stage for our next speaker, Ahmed Atala, an engineer, who will provide a broad review of AI's current landscape and outlook in healthcare. We will follow that with Dr. Walachnik, who will dive into the role of AI in neuromodulation, sharing some promising clinical evidence and research findings. Finally, Dr. Christopher Robinson will explore the future directions of AI and its possible applications in neuromodulation and, beyond that, in chronic pain. 
So, we encourage all of you to submit your questions in the Q&A part of the chat box, and at the end we will try to address all of them. So, without further ado, let's begin. Dr. D'Souza, the floor is yours. Perfect. Thank you all for having me. I'm just sharing my screen; give me a couple of seconds here. Perfect. So, again, thank you, Yashar, for that extremely kind introduction. I'm very honored to be with a group of wonderful panelists today. You can stop me at any point as I'm giving this talk; I want us to be interactive. Again, my name is Ryan D'Souza. I work at Mayo. I'm the director of neuromodulation as well as the inpatient pain service, and I'm also honored to have recently joined the board of directors for NANS. Congratulations on that. I forgot to mention that, Dr. D'Souza. Well deserved. Thank you. So, this is a great panel today on artificial intelligence, and my talk is geared toward the ethical considerations and challenges of incorporating AI, so I'm giving a different perspective than the other panelists will. We have a wonderful panel today that will really talk about the benefits of AI, but I'm giving the other side, the flip side of AI: the potential challenges and the ethical considerations. I'm also going to supplement that with some research implications, clinical implications, and especially cybersecurity issues. But before I continue, I don't want you to think that I'm anti-AI. I'm actually a huge advocate of artificial intelligence. I think, honestly, AI will take over every sector of the world, and it's already having its impact in healthcare. If we don't adopt AI, we're going to be left behind. So, to those who are very skeptical about AI, I would just caution and say that, yes, it's okay to be skeptical. 
There are some downsides to AI, but everybody else will adopt it at some point, and they're going to be at an advantage. And like I said, AI has endless potential in healthcare. It could be incorporated into precision medicine. It could be utilized to make real-time adjustments to therapy. It could even give patients the ability to make those adjustments themselves. Obviously, there is potential for improved outcomes, reduction in complications, and also predictive modeling: being able to predict how well a patient will respond to a specific therapy. So, on to the problems with incorporating AI; I'm going to talk about three big buckets. The first bucket is technical challenges, and the question there is: can AI make accurate decisions? There's a study that was recently published in the RAPM journal, where the authors gave ChatGPT 10 different scenarios of patients who were going to get regional anesthetic blocks and were on various anticoagulation regimens. The prompt asked ChatGPT how we should manage these patients' anticoagulation. Accuracy was measured as area under the curve, and the model scored only a few points above flipping a coin. It was basically as good as chance at coming up with a correct decision for patients on anticoagulation. So then the authors trained the model on the ASRA anticoagulation guidelines, gave it the same exact scenarios, and checked whether it did any better. It did do better, but the area under the curve was only 0.70. So, is that a high enough accuracy for making these correct clinical decisions? 
Can you trust an AI if it's only making 70%-accurate clinical decisions, especially in something as important as managing anticoagulation for neuraxial or regional blocks? I would say, obviously, the answer is no. So, that's one potential issue in terms of ensuring data quality and integration. Another potential technical challenge is the variability in data sources. AI has to be trained; it has to abstract its data from a specific source. And there are a lot of different databases out there with millions and millions of data points. For example, CMS, which is your Medicare source, your government payers. There's the PearlDiver Mariner database. There's the National Health Interview Survey database. A lot of different databases are collecting a lot of data points, but they look at different parts of the population. If you're abstracting data from different sources, you might have different variations, and depending on which database an AI is trained on, it could potentially deliver different decisions. So, that's another thing to keep in mind. Another issue with AI is that you need enough data to train it; you can't train it with limited data. As we all know, in pain medicine, neuromodulation is a very important modality that has essentially revolutionized the pain field. But not every patient is getting a neuromodulation device implanted; only a very small percentage of patients are. So you have limited data to train these AI algorithms, and the more data points you have, the better equipped you are to make those decisions. That's another technical challenge to keep in mind. Again, going to the point I was making earlier: I'm not sure AI is quite ready yet. It will be ready, but not just quite yet. 
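As an aside, the AUC comparison described here, chance-level guessing versus a modestly informed model, can be sketched with synthetic data. Everything below (the case count, the labels, the score distributions) is invented for illustration and is not data from the RAPM study.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic "correct decision" labels for 200 hypothetical cases.
y_true = rng.integers(0, 2, size=200)

# A model that guesses randomly scores near 0.5, i.e. a coin flip.
auc_random = roc_auc_score(y_true, rng.random(200))

# A model whose scores lean toward the truth lands noticeably higher,
# in the ballpark of the 0.70 figure mentioned in the talk.
auc_informed = roc_auc_score(y_true, y_true + rng.normal(0, 1, size=200))

print(f"random guessing AUC: {auc_random:.2f}")
print(f"informed model AUC:  {auc_informed:.2f}")
```

An AUC of 0.5 means the model's ranking of cases is indistinguishable from chance, which is why the untrained result in the study amounted to a coin flip, and why 0.70 is better but still far from trustworthy for high-stakes decisions.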
We also know that another technical challenge is that we have a multitude of device vendors. We're not going to mention the specific device vendors today, but you all know there are many of them, and they all have competing interests. So, having one specific AI software integrate with different types of hardware from different companies, and navigating those competing interests, might also be a potential technical issue. The second big bucket I want to highlight is ethical considerations. That really comes down to, number one, patient privacy. Whenever you're handling digital and electronic data, especially when it's being incorporated into artificial intelligence, there's always a potential for breach of patient privacy and confidentiality, and there are many documented scenarios of this in healthcare. This brings up a bigger issue, and that's cybersecurity. Now, cybersecurity is not directly related to AI, but whenever we're talking about digital health, it would be remiss of me not to mention it. Cybersecurity essentially entails safeguarding computer networks and digital information from being penetrated, from malicious damage or disruption to that data. And healthcare is a very prime target. I was very surprised when I was researching this: over 90% of healthcare organizations have been targets of cyberattacks. There was a big probe done in 2009 where the VA found that 173 of their medical devices were infected with malware, and as a result they had to isolate and recall 50,000 medical devices. That's a huge deal. But importantly, cyberattacks can actually be utilized to do patient harm. So, let me give you two very notable examples. 
What if a cyberattack is performed in which the hacker manipulates a patient's ICD, the device they use for arrhythmias or heart disease, and delivers an inappropriate shock? That could lead the patient to a fatal arrhythmia and hence death. Let me give you a different example. Let's say a patient has diabetes and an insulin pump. What if a hacker hacks into their device and delivers an inappropriately high dose of insulin? That could also result in death. How does that relate to neuromodulation? Well, we're talking about hardware here as well. Yes, there's breach of confidential data; if things are in digital format, there's potential for identity theft. But what about ways to potentially cause patient harm? The hacker can potentially turn off the device; that's one way. Another way they can cause harm: let's say they change the spinal cord stim or DRG stimulation settings and rev up the amplitude. That could lead to unpleasant dysesthesia and potentially even nerve injury. What if the patient has an intrathecal pump? The hacker can potentially deliver a large bolus, and that could lead to drug overdose, respiratory depression, and obviously death. So, these are some very notable examples of how this directly relates to neuromodulation as well. In terms of other ethical considerations, bias and fairness also apply to AI. There's a multitude of studies, just with human clinical care, not AI, where there have been discrepancies: racial biases, biases based on sex, biases based on socioeconomic status. If we're inputting data from human experience into AI, there's a potential for the AI algorithm to also be racist. The AI algorithm can also be sexist. That's something to take into consideration, and there's already research coming out showing those biases from AI algorithms. 
For example, a sicker Black patient may be assigned the same risk score by an AI algorithm as a healthier white patient, and the algorithm might then falsely conclude that Black patients are healthier than equally sick white patients. So, something to keep in mind there. This is probably one of the biggest things I want to highlight: transparency and accountability. Let's say an AI makes a decision and it leads to a huge complication, with resulting morbidity and mortality. Who is to blame at that point? Is it the neuromodulation vendor that incorporated the AI software, or the AI creator, or does it fall to the prescribing clinician? There are a lot of these nuances that are still being handled and defined by various authorities, and this is one area that will definitely need to be defined in terms of ethics in the future. Let's briefly talk about patient and clinician considerations. Not every patient trusts AI yet. There are studies showing that patients are actually not really willing to choose AI unless their physician tells them to, and the same goes for physicians: a lot of physicians are not yet ready to adopt AI into their practice. Let me give you a different perspective. In terms of clinicians agreeing with each other, there are a lot of differences in the ways we treat patients. If I put 10 pain physicians in a room and asked them, hey, how would you treat painful diabetic neuropathy, or how would you treat low back pain, you might get 10 different answers, all right? So, if we're getting 10 different answers from humans, how can we expect there to be consensus with an AI? Even with all these different guidelines that come out, there are always discrepancies and differences between them as well. 
So, if you're having discrepancies in human decisions, and you're using those decisions to train AI software, that can potentially come to a crossroads as well. Again, this was a very brief talk. I wanted to highlight the other side of AI, mainly its drawbacks. My wonderful fellow panelists here, including Dr. Robinson, Atala, and Walachnik, will be talking about the huge potential of AI to advance the field. Again, I'm not anti-AI. I'm very pro-AI; I'm a huge proponent of it. I always say that if we don't adopt AI, we're going to be left behind. But I really caution everybody about its potential drawbacks. I don't think it's quite ready yet for widespread implementation, but I do think that might change in the future. If you ever want to contact me, here's my contact information, and I'll be happy to take any questions either after this talk or at the end when all the panelists are done. Thank you. I think you're muted, Dr. Eshraghi. I was saying it was a beautiful picture. I bet you took it at NANS last year. Oh, thank you. They were taking professional pictures. So, those were very essential points and critical discussions that you brought up, Dr. D'Souza. As you all know, this is inevitable: AI will eventually conquer the field of medicine, including neuromodulation. But we have to be prepared, and we have to let it happen in a controlled and safe manner for our patients, like any other technology: we have to make sure it's tested, that the data is reproducible, and that it's non-biased. As you mentioned, it depends on what kind of data you feed to the AI. Biased data fed to the AI can give you a biased, even racist, outcome. 
And reliability: I always think of how Tesla will tell you they have Autopilot, but if you have an accident, whose fault is that if you're on Autopilot? Is it Tesla or you? Obviously you; Tesla is not going to take the blame for it. So, great points to bring up. That sets a good stage for our wonderful engineer, Ahmed Atala, to tell us what's going on out there and what's available in artificial intelligence that can actually be utilized in neuromodulation. All right. Yeah. Thank you, Dr. Eshraghi, for the introduction, and thank you, Dr. D'Souza, for setting the stage; really important context and a lot of great points. So, my name is Ahmed Atala. I've been working in AI research and engineering in the technology industry for more than eight years. I'm not a physician, but I'll try to give an overview of AI in general and some of the future trends. Starting with: I think it's fair to say there is a ton of excitement about AI nowadays, but I'd like to highlight how we got to this point, and really how quickly we've gotten here. Academia has been working on AI since maybe the sixties, but the path toward the current state of the art started only around 2012, with an important paper commonly referred to as AlexNet. There, they really demonstrated that deep neural networks, as they're called, are more than just an interesting theoretical concept; they're a very promising path toward AI, toward what people have been trying to achieve. From that point on, a lot more interest in academia and industry went into neural networks as a flavor of achieving machine learning. About five years later, there was another inflection point: transformers, or attention, which has since become the standard architecture and approach for state-of-the-art AI. 
And nowadays, the most capable models all use this under the hood. That takes us, after another five years, to 2022, probably the big coming-out party of AI, where it gained a lot more mainstream interest with ChatGPT. Nowadays there are other alternatives, but essentially these are large language models. The difference with them is that you now have fused single models with many capabilities; they can do many different tasks, and typically you don't need to give them new data, because they already have those capabilities. From the perspective of working with text, they can do translation, question answering, and more. But if we step back from the hype about AI and just think about the way AI models are typically trained, or at least the traditional way of training them: you start with a dataset for, let's say, an application you have in mind or a problem you're trying to solve. You come up with a dataset that represents this problem: inputs of the relevant type, and for each one a label. Here's a simple example: you're trying to understand whether a piece of text is positive or negative, happy versus sad. The traditional way of doing this is to come up with as many input examples as you can; typically, your results depend on how many you can gather. Then, through a process of supervised training, which is really just different techniques for training the various types of AI algorithms, and that's very well established at this point, you get an AI model that can solve this problem. From the point in 2012 I mentioned, up to about four or five years ago, this is what you did repeatedly for every single type of challenge. The recent huge capability that has been introduced is large foundation models. 
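The labeled-dataset workflow described above, inputs paired with positive/negative labels and then supervised training, can be sketched in a few lines. The example texts and the scikit-learn pipeline here are illustrative choices, not anything shown in the webinar.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny labeled dataset: each input text gets a sentiment label.
texts = [
    "the therapy worked wonderfully", "pain relief was excellent",
    "great improvement in my symptoms", "very happy with the outcome",
    "the treatment failed completely", "pain got worse afterwards",
    "terrible side effects and no benefit", "very disappointed with results",
]
labels = ["positive"] * 4 + ["negative"] * 4

# Supervised training: learn features from the text, then fit a classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["excellent outcome, very happy"])[0])
print(model.predict(["terrible, the treatment failed"])[0])
```

In practice, as the talk notes, the quality of the result scales with how many labeled examples you can gather; eight sentences is only enough to show the mechanics.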
Most commonly these are large language models, but in general they don't have to be just for text, which is why they're referred to as foundation models. The idea here is that instead of training a separate model with its own data for every single problem, you start with large models that are trained on enormous datasets that are general across different domains and fields. By doing this, you create what we call a foundation model. On its own, it has what's referred to as zero-shot capabilities, which means it can solve some problems even without being given any additional data. It can translate, it can do question answering, and if you're working with other modalities, it can solve problems relevant to them. The other important advantage is that, starting with a foundation model, you can fine-tune it for a specific problem you have in mind, with a much smaller dataset. Instead of requiring a large dataset, you can now achieve similar or even better performance with a much smaller one by leveraging the intelligence that was built into the large model during pre-training. On the flip side, with this approach you now have big, expensive, power-hungry models, and probably most importantly, they're complex, and it's more difficult to predict exactly what kind of output they will give. So if you're trying to guarantee a very high bar for reliable, correct answers, it has become more and more challenging to predict what the model is going to do for every type of input. My examples so far have been focused on text, but in neuromodulation and other medical applications, you might have different input modalities. For predicting patient outcomes, for example, you typically have tabular data with patient attributes. 
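The fine-tuning idea, reusing a big pre-trained model and fitting only a small task-specific piece on limited data, can be caricatured without any real foundation model. Everything below (the frozen random "encoder", the toy labels, the least-squares head) is a made-up stand-in for the real machinery, just to show why a tiny downstream dataset can suffice.

```python
import numpy as np

rng = np.random.default_rng(1)

# Pretend this projection was learned during large-scale pre-training;
# it stays frozen from here on.
pretrained_W = rng.normal(size=(100, 8))

def encode(x):
    """Frozen 'foundation' encoder: raw 100-d input -> 8-d features."""
    return np.tanh(x @ pretrained_W)

# Tiny downstream dataset: 20 labeled examples is enough here because
# the heavy lifting is done by the (pretend) pre-trained encoder.
X_small = rng.normal(size=(20, 100))
y_small = (X_small @ pretrained_W[:, 0] > 0).astype(int)  # toy labels

# "Fine-tune": fit only a small linear head on the frozen features.
feats = encode(X_small)
head, *_ = np.linalg.lstsq(feats, 2.0 * y_small - 1.0, rcond=None)

preds = (encode(X_small) @ head > 0).astype(int)
accuracy = (preds == y_small).mean()
print("training accuracy with a tiny dataset:", accuracy)
```

The design point mirrors the talk: the expensive general-purpose representation is trained once, and each downstream problem only has to fit a small head on top of it.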
Another example would be time series, like sensor data coming from hardware that you're using. A lot of what I mentioned is still applicable, but the main difference is that the underlying model changes: instead of being something that's capable on text, it has to handle tables or time series. The trend of being able to use a large foundation model is not as developed as it is for text. For time series in particular, there is an increasing trend nowadays to try to train a foundation model that's very good at time series, which you can then adapt for your problem. The effectiveness is not as well established as it is for language problems, but it's an increasing trend. And if you cannot take that approach, there's always the traditional way: if the problem is important and valuable enough, you find, record, and gather as much data as you can, and then train a model in the traditional machine learning way. The results you get are usually in line with the quantity and quality of the data you can get. One more trend that's going on these days, touching on what Dr. D'Souza mentioned about privacy: as AI becomes more powerful, it will also become more personal. Models will have more holistic knowledge about the individual, and maybe that will lead to better results. So, keeping privacy in mind, there are approaches that try to build and deliver small models or embedded models. These are good for privacy because they can run locally, so you're not sharing your data with a cloud or some third party. The downside, typically, is that performance suffers because they're not as big and powerful. 
And I think the trend nowadays is actually to do a hybrid of these two, where some things are done on small personal models, and when needed, you offload to complex large models, balancing the privacy and performance considerations. There's probably a lot more we could talk about, but those are the points I'll make in the slides. I'm happy to connect with anyone if you have any questions later on. That's all. Right, wonderful. Ahmed, I really enjoyed your technical perspective on the capabilities and potential of AI. Regarding zero-shot: it might have changed since I last looked, and I'm not an AI specialist, of course. But initially, when ChatGPT and the other openly available AI resources came out, the main criticism was that they were kind of glorified Google search. They didn't really have zero-shot capabilities; they didn't have decision-making potential. And the discussion was that a lot of investment initially came into this industry, and because the outcome was not as exciting as expected, a lot of it actually went away, and there was a lot of disappointment. Do you have any points on that before we proceed with Steve's presentation? I think you're absolutely correct. There is a limit. Under the hood, the way these things are trained, they're trained to predict the next word; some refer to them as stochastic parrots. There is a limit to what they're capable of as a store of knowledge. They might sometimes appear to know things, but that's probably not the best way of using them. For example, if you want to use one for question answering, the typical approach nowadays is to do a search step, like a Google search step, and come up with, let's say, 10 or up to a hundred results. 
And then you tell the model: here are the results I found on this topic. The model can then usually do a good job of summarizing them and coming up with the main insights you care about. As I mentioned, they're not always going to give you the right answer, but you wouldn't want to depend on just the model to give you answers. You usually have to give it additional inputs for things like question answering. Excellent. Excellent. Thank you. Now we'll move forward with another great presentation: Dr. Stephan Walachnik, who is going to talk about AI in neuromodulation and some of the clinical evidence around it. All right, the podium is yours. I'm very excited to learn more about the current evidence. All right. So, my name is Steve. My background is essentially in anesthesia, but I also have a background in engineering, so maybe I can bridge the gap to some degree. I'll be talking about some of the evidence that's at the forefront of the AI revolution that we can see in pain and clinical medicine, and how that applies to neuromodulation. My objective is essentially to go over some of the current areas where machine learning and AI are useful in clinical research, because that's relevant to any type of research domain where clinical data is available. From there, I'll try to connect it with where neuromodulation is heading. In machine learning, as we just heard, there are a couple of different strategies for leveraging it. One of the ways is classification, and another is obviously natural language processing, the things we heard about recently. But how that applies to clinical data can vary. 
I think the lowest-hanging fruit in terms of leveraging AI and machine learning for clinical data, and hence identifying potential methodologies for using different types of treatments and identifying patients who would be good responders and the like, is subgroup identification. That's the bread and butter of basic machine learning, and there are a couple of flavors of it. One of the most common things you'll hear about these days is natural language processing; that's the umbrella term that ChatGPT and transformer models fit into. Essentially, we're parsing raw clinical text to extract features. ChatGPT is obviously using more of a classification strategy: you're trying to predict a word given an input. But beyond that, you're essentially learning features from the data that allow us to do that, and those features can be used to disentangle complex distributions of samples. They could be clinical samples from human subjects, or they could just be clinical notes and texts, and you can group them accordingly to find underlying structure in the data, such that you can infer interesting traits within that study population, which you then target with different modalities, research methodologies, or ultimately treatments. Unsupervised learning is another umbrella term associated with that. It essentially means using machine learning to identify or understand interesting structure in your data without the use of labels. In this case, you might have 1,000 samples. Those 1,000 samples might be clinical texts, but you don't really know which of those texts represents a sick patient versus a healthy patient. You're nevertheless going to find an underlying structure with certain types of algorithms you may use. 
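A toy version of this label-free structure finding: cluster a handful of invented clinical-style notes with k-means and watch two groups fall out without ever telling the algorithm which note belongs to which group. The notes, the TF-IDF vectorizer, and the cluster count are all illustrative choices.

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

# Unlabeled notes: three pain-related, three routine-wellness.
# The algorithm never sees these categories.
notes = [
    "severe low back pain radiating to the leg",
    "chronic lumbar back pain with radiculopathy",
    "persistent back pain after laminectomy",
    "routine wellness exam, patient doing well",
    "annual wellness exam, no complaints",
    "normal wellness exam, patient healthy",
]

X = TfidfVectorizer().fit_transform(notes)
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(clusters)  # cluster ids are arbitrary; what matters is the grouping
```

On real clinical text the vocabulary overlap is far messier, but the principle is the same: shared features pull samples into the same subgroup, which you can then inspect or target downstream.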
Certain types of datasets are useful for leveraging these methods, which can help identify interesting subpopulations of patients for different downstream applications. Ultimately, that means getting omic data: anything from genomic data for GWAS studies, through metabolomic studies, to RNA-seq studies. These are different methodologies we can use to disentangle the underlying structure of study populations and then group and target them accordingly. And then, obviously, what we just heard about: classification and prediction. That's another obvious use of machine learning and AI in the clinical domain. We have a bunch of inputs, and we can ultimately predict whether those data represent sick or healthy patients, whether they will ultimately become sick or healthy, or what the next blood pressure measurement might be in a time series of data. These are different things we can apply to the data we ultimately get in the pain domain. As for current work applying machine learning in the pain domain, the inputs we are leveraging can be anything from fMRI to EEG to PET scan data. Beyond that, a lot of these models have been shown to be useful for developing decision support. Studies have shown that decision support via machine learning can be obtained for lower back pain treatments, and also for addressing musculoskeletal pain. With machine learning, we've also been leveraging facial imaging information to help detect pain from grimacing and different types of pain-related facial expressions. 
And then from there, as I just mentioned, subgroups and features can be extracted using machine learning to help us find subgroups associated with different types of pain modalities. Video sequences can also be used for pain classification. These are all common in the field right now. For NLP specifically, there's been a variety of work. A lot of it has been parsing text: there's been risk assessment for pain treatment using text-based inputs, including things scraped from Twitter and the like. There's also work evaluating EHR data to classify particular subpopulations of patients as responders or non-responders to a given pain treatment, and predicting placebo responders using NLP with different types of text data. In terms of the omic approaches, I went over this a bit already; this is just a nice cartoon that demonstrates it. Leveraging the many different types of omics out there is a way of identifying these subpopulations of patients. Ultimately, these could be used for targeted approaches to treatment development, or for identifying who could be a potential responder to a particular therapeutic or neuromodulation modality. There's a bunch of work right now leveraging omics in the pain domain. A lot of it uses genome-wide association studies, but gene expression data has also been shown to identify chronic pain subpopulations, as has leveraging metabolic pathways with metabolomics. From there, you're using machine learning algorithms, everything from something as simple as a linear regression to something a little more sophisticated, like a support vector machine or a neural network, to classify these really complex datasets that tend to be quite difficult to deal with. 
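That classification step can be sketched on simulated "expression" data: two groups that differ in a handful of genes, separated by a linear support vector machine with cross-validation. The group sizes, effect size, and gene counts below are arbitrary choices, not values from any study.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(42)

n_genes = 50
# Simulated expression profiles: "responders" have 10 genes shifted up.
non_responders = rng.normal(size=(40, n_genes))
responders = rng.normal(size=(40, n_genes))
responders[:, :10] += 1.5  # differential expression in 10 genes

X = np.vstack([non_responders, responders])
y = np.array([0] * 40 + [1] * 40)

# Cross-validated accuracy of a linear SVM on the high-dimensional data.
scores = cross_val_score(SVC(kernel="linear"), X, y, cv=5)
print("mean cross-validated accuracy:", round(scores.mean(), 2))
```

Cross-validation matters especially here: with many more features than samples, as in real omics data, training accuracy alone would be badly optimistic.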
And machine learning turns out to be pretty useful in that regard. In neuromodulation, one of the more frequent types of devices we encounter is the spinal cord stimulator. They fail in upwards of 25% of patients, and a big question is why, right? That circles back to what I've been saying: even before we get into the big-picture AI revolution, you can start to disentangle that by using machine learning strategies to identify subpopulations of patients to hone in on, ultimately leading to additional research or to other types of biomarkers that may explain why they fail. And from there, we can develop better strategies to untangle that whole situation. That's just the start. Beyond that, work has demonstrated the use of signal processing and machine learning for interpreting neural signals — we've seen this for decades in brain-computer interface work with neuroprostheses and amputees. So there's work showing we can leverage machine learning and signal processing to disentangle these complex time-series signals and pipe them into a sophisticated model, either to create a classifier that detects biomarkers usable for device design, or simply to predict the next component of a signal. Using those strategies, we've been able to do really interesting things in brain-computer interfaces, and that should extend to any strategy that applies a model to neural signals with an intervention at the end of it. The patterns we extract can be obtained invasively or non-invasively in neuromodulation.
And that could be anywhere from using EEG, MRI, or fMRI to using something like ECoG. Nevertheless, the signal quality and the noise are going to vary, and the more noise there is, the harder it is to extract a worthwhile signal — but often, less noise means the recording was more invasive. That becomes a hurdle we need to balance and weigh going forward when we try to use these signals for device and intervention design. Nevertheless, these extracted features can be used to optimize feedback, in something as simple as reducing pain signals or pain perception. We can record signals across a bunch of channels, apply filters and signal processing to extract them, train on a particular subject, and then identify useful biomarkers within those signals and use them to make adjustments and tune accordingly. This is essentially the foundation of the smart technologies we're seeing in the spinal cord stimulator domain and the like. These feedback loops can be real-time, or they can be set aside and used for tuning down the road, when the patient comes back in to see how their device is working — so they need not be real-time, but that's an option. Now, if the signal is good enough for us to extract them, the biomarkers themselves can tell us anything from body position in space, which we can measure with an accelerometer, to shifts in spinal cord position, which could be useful for disentangling cardiovascular or respiratory changes. And they could also be biomarkers from the neuronal signals themselves.
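The pipeline the speaker outlines — noisy multi-sample recording in, filtering, then a scalar feature out — can be sketched at toy scale in pure Python. The "recording" below is a synthetic 5 Hz oscillation buried in Gaussian noise, the sampling rate is an assumption, and a uniform moving average stands in for a proper band-pass filter (real work would use something like `scipy.signal.butter`/`filtfilt`).

```python
import math
import random

def moving_average(signal, window):
    """Crude low-pass filter: uniform moving average over `window` samples."""
    out = []
    for i in range(len(signal)):
        lo = max(0, i - window + 1)
        out.append(sum(signal[lo:i + 1]) / (i - lo + 1))
    return out

def rms(signal):
    """Root-mean-square amplitude — a simple band-power-style feature."""
    return math.sqrt(sum(s * s for s in signal) / len(signal))

fs = 500  # Hz, assumed sampling rate for this toy recording
rng = random.Random(0)
# One second of "neural" data: slow 5 Hz component plus unit-variance noise.
raw = [math.sin(2 * math.pi * 5 * t / fs) + rng.gauss(0.0, 1.0)
       for t in range(fs)]
clean = moving_average(raw, window=25)  # ~50 ms window suppresses fast noise
```

After smoothing, the RMS feature drops because most of the broadband noise power is gone while much of the slow component survives — the same trade-off, at scale, is what makes "extract a worthwhile signal from noise" the central engineering problem here.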
They could be action potentials coming from A-beta fibers, associated with pain suppression. They could be signals from neurons that help us determine whether repetitive neurostimulation occurred. And we can also capture things like local field potentials in certain parts of the brain. Once we have these biomarkers, we can use them to pinpoint targets. A lot of research so far has shown that different neuronal patterns and areas can be targeted for different pain treatments, at least theoretically. Asynchronous stochastic bursts, for example, have shown some efficacy in overcoming neuronal adaptation. There's differential target multiplexed stimulation, which essentially targets different neuronal and glial cell types. And the target type and frequency of the neurostimulation can ultimately impact the efficacy of the impulse you're delivering. Again, this comes back to having to train classifiers to disentangle those relationships and then send a sophisticated signal that can actually be efficacious. And this piggybacks on one of the problems mentioned earlier about having enough data. In this domain, it's not just the quantity of data that's the problem. Obviously, you need to train these models across patients to capture, on average, how the models should behave; but people's physiology also varies significantly within themselves, so these models will have to be trained within a particular subject as well, to find the maximal and minimal thresholds at which these impulses and biomarkers can be measured. And from there, they will have to be optimized.
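The within-subject tuning loop described here — measure a biomarker, compare it to a target, nudge stimulation, clamp to safety limits — can be sketched as a minimal proportional controller. Every number below (the gain, the milliamp limits, the normalized biomarker readings, the target of 0.5) is hypothetical, and this is not any vendor's actual closed-loop algorithm, just the shape of the idea.

```python
def adjust_amplitude(current_ma, biomarker, target, gain=0.1,
                     min_ma=0.0, max_ma=5.0):
    """One step of a proportional closed-loop update:
    raise stimulation when the (hypothetical) pain biomarker is above
    target, lower it when below, and clamp to device safety limits."""
    error = biomarker - target
    proposed = current_ma + gain * error
    return max(min_ma, min(max_ma, proposed))

amp = 2.0                               # starting amplitude in mA (assumed)
readings = [0.9, 0.8, 0.6, 0.4, 0.3]    # hypothetical normalized biomarker
for r in readings:
    amp = adjust_amplitude(amp, r, target=0.5)
```

As the biomarker falls through the target, the controller first ramps the amplitude up and then backs it off — and the clamp guarantees it can never propose a setting outside the per-subject thresholds the speaker says must be found first.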
So there are a couple of layers of training, and a couple of layers of "enough data," that we need to get right to optimize the way these models learn. It's a complicated problem. And I definitely agree it's going to be some time before we have enough data, and good enough strategies, to disentangle what's going on and take advantage of the machine learning models that could certainly learn from that data if it did in fact exist. But that's essentially everything I have — thank you for listening. Excellent, Steve, that was a very good presentation. I think it opens up our minds to the potentials and to what neuromodulation is already utilizing. And as you said, maybe in the future, instead of just using AI to answer messages, for patient training purposes, to create patient instructions, or to answer simple questions, there will be an AI model essentially coupled with your IPG that will continue to learn from the spinal cord — evoked potentials or any other biomarker — and gradually find the best regimen for the patient. And based on the generalized data we have from the past, or from a patient's own previous data, it will tell us what regimen to start with. Maybe it will say a CRPS patient should start neuromodulation at this dose and with this type of waveform — that's what the predictive model shows you, and then you cater it to the patient's needs over time. Very exciting. So I'd like to hand this over to Chris Robinson, who's going to tell us: what do you think the future will bring with all these potential capabilities, and the limitations and risks we should expect for AI and neuromodulation? Thank you. Can you hear me? Okay.
So I want to say thank you, Steven, and thank you, Yashar, for segueing into the future of neuromodulation. In some ways, we're already there. We all use these subjective pain scales, which are incredibly variable — and yes, pain is different from one person to the next — but we are now developing a model to create an objective way to measure pain, and a way to predict it. I won't go into the details, but our model is currently being developed and will be validated later next year. Now, one could say every person is different, but let me take you back about a century. In the early 1900s, there was no way to measure a heart attack other than someone having pain. Then in 1909, the first arrhythmia was measured; in 1910, the first heart attack. The only way we came to truly understand what a heart attack is — a STEMI or an NSTEMI — was by collecting enough data to say: when the ST segment is elevated or depressed past a certain threshold, we know it is a STEMI or an NSTEMI. Now, when you have pain, it is simply an electrical signal being passed from a point in the periphery up to the brain. So who's to say we cannot measure that same signal and decode a language of pain? Sometimes our grandmothers or grandfathers would say, "Oh, it's raining, my knees are about to hurt." There are some people who can say, "I'm about to get back pain." All that data is out there, and with AI, we are now able to collect it and develop a predictive model. So what is the future of neuromodulation? If we can predict pain — create and validate a model for it — and tie it into neuromodulation, we can remove the individual, remove the user — sorry, the slide's not advancing.
We can now remove the user from actually interfacing with the neuromodulation device. For example, if we can detect — predict — that the patient is about to be in pain, the device can activate or change its programming. That in itself is a game changer. And now that we have adaptive neuromodulation in the model we're building, we have to look at the next step. We already have devices in the spine and in the periphery, and some companies are working on functional neuromodulation. For those with incomplete spinal cord injuries, there are animal studies demonstrating that if you activate the spinal cord, you can actually start to regenerate or form new synapses. So who's to say that, once we have this predictive model and have mapped out the different pathways in the spinal cord that we can stimulate, we can't make people walk again? Taking the bigger picture: the first phase is to predict pain — to create an objective measure of pain. Then we tie that into neuromodulation so that it learns your behaviors and personalizes itself for you. At the same time, while we're all unique, there are similarities in pain syndromes and symptoms between patients. From there, we have adaptive neuromodulation, and beyond that, functional neuromodulation. There will come a point when we can use artificial intelligence and neuromodulation to take a patient with a spinal cord injury and get them walking again. Our team is now developing the model for phase one, and I truly believe that within the next 10 or 20 years, our current technology will be used for something much bigger and much better. So look out in 2025 for a model on the prediction of pain, and on making it objective.
I know it's quite late, so I want to open up the floor for any questions anyone may have. Hey, Chris, I just wanted to say: excellent. I loved the part you brought up about the functional piece — patients being able to ambulate, especially those with spinal cord injuries. It's a great objective measure, especially now in spinal cord stimulation, when a lot of insurance companies and others are attacking the field in a way. If we have objective evidence — hey, this actually works, it can actually lead a patient to walk — I think that's a great next step. We're actually running clinical trials here at Mayo, started last year, where patients are getting DRG leads and dorsal column leads, and they're able to walk. These are patients who have had spinal cord injuries for decades — they're paralyzed — and immediately in the PACU, as soon as the leads are placed, they're able to walk, which I think is going to be revolutionary, especially once it gets FDA approval. Right now it's still in phase two trials, but I think it'll be a big deal for sure. That's awesome — that's really exciting. I'm sure you're aware of the study published in Nature in 2023, with the first few patients: they placed an implant, and with computer modeling they enabled patients who had been paralyzed even for a long, long time to start walking and make normal movements. So imagine if AI gets into that and continuously programs the patient for certain activities and predicts their future movements. A lot of potential, and obviously a lot to explore.
There are a lot of engineering components — as Ahmed was saying, it's easy to say but hard to do, with a lot of technical and programming aspects to making sure it happens in reality, is cost-effective, finds a path to becoming standard of practice, and becomes more accepted by patients and physicians. I want to know what you think: we know this is the future, but how easy or challenging will it be for our patients, or for neuromodulators, to adopt? Will patients be scared, saying a human is no longer taking care of me — AI and a computer are making the decisions? Will physicians worry that the data are fed to them with the decision already made for them, and maybe even fear that creativity will leave our field because the decision is made for us already? Yeah, I think time will change people's sentiment toward AI. Look at Facebook: back in 2005 it was for an exclusive cohort only, and zoom forward 20 years, now my mom's on Facebook. Before, she was telling me to get off it; now I'm telling her to get off Facebook. The same goes for AI. We actually did a large-scale survey, and the sentiment from patients and physicians is actually quite positive. I think as time moves forward, we'll move away from this negative "it's going to take over the world" view — I mean, if you look at Terminator, Arnold isn't really taking over the world, right? We're way too far from that point. And as the market shakes out and the big players come in, and those without the reserves to compete drop out, we're going to see a whole different scenario for how we're utilizing AI. People are becoming more and more comfortable.
They're writing emails with it, summarizing papers with it — though don't write your emails with it, they come out quite generic — but people are adopting it more and more, and it's only a matter of time before the sentiment becomes more and more positive. There are already places that have AI listening during patient encounters, and patients are becoming more accepting of it. As Ahmed said, AI has been around for decades — in our phones, in our computers — and it's only in the past five years, with all these chips coming from NVIDIA, that the potential has really exploded. So I'm excited to see where we will actually go with this. Ahmed, what do you think? What are the real chances that some of the great ideas Steve and Chris were talking about will come to reality in our clinical practice? I think on our side there's the innovation adoption curve: as the technology matures and capability builds up, you start getting adoption from the most excited initial adopters, then the early adopters, et cetera. If the technology matures in line with that, you can typically achieve successful wide adoption. As for challenges, there are a lot of engineering challenges: can we do it reliably, with privacy in mind? How do we address the cost — the expense of all these chips that do the AI inference? I think these engineering challenges are significant but solvable. There are still challenges on the AI science and theory side, and in what's actually possible. I mentioned that in the last 10 years there have been certain inflection points and breakthroughs; we will need more of those innovations.
But who knows exactly how that will proceed — with the amount of attention, focus, and investment, it will more likely happen than not. So yes, a very exciting future ahead of us. Looks like Elon Musk's team implanted the second chip in their patients. Maybe everybody will have one of those chips collecting data and telling us what to do — the best AI for you: your personal physician, personal nurse, and neuromodulator, all in one chip. Chris, you were saying something? No, I mean — not to take the Disney out of it — but all we are is electrical signals, right? At some point people ask, can you read brain waves? We kind of already do, and we did before, with EEGs. Now we have Neuralink putting patterns to certain thoughts and movements. At some point, you may be able to read the mind and also translate it into electrical impulses through neuromodulation. I'm very excited. We are developing this model for the predictive abilities of pain, and we've already analyzed hundreds of thousands of data points. So hopefully we'll keep pushing the boundary of pain and what we can and cannot do. Excellent — very excited to see the results of your work and innovation. Steve, there's a question from the audience: is there any AI being used for neuromodulation training? In your search for the evidence, have you noticed anything? Or Ryan, have you seen any AI technology being used? I don't know about spinal cord stimulators, other than a couple of proprietary things that help — I gather they are being trained on patient signals.
I don't necessarily know what's under the hood, because those details obviously aren't released, but there's definitely work where sophisticated models are being used in deep brain stimulation, and I can't imagine that would be any different from applying them to a spinal cord stimulator down the line. There's also a lot of research on the transition from open-loop to closed-loop systems, which essentially take in patient information and feed it back. The deep brain stimulation models I've seen have been as sophisticated as using pretty deep neural networks as the inner workings, doing the biomarker prediction that can then help set baseline measurements and adapt the model accordingly. But outside of that, I don't know the specifics in spinal cord stimulation and the like. Yeah, regarding that question on neuromodulation training for clinicians: I've not heard of it specifically, but some work has recently come out on the use of AI in educational curricula for regional anesthesia training — there are a couple of papers recently published in the Rathbun Journal specifically on that. So again, my views are concordant with all the panelists here. I think AI will take over at some point in terms of acceptance; that will come with time, and I think the early adopters — the ones adopting AI earlier on — will certainly be ahead of the curve. A lot of the recent literature on AI has been about validating whether it's correct and comparable to the gold standard, and a lot of it has shown that it's not there yet. But I think once we continue to train these various algorithms, it will eventually reach the standard we expect in healthcare. Next slide. Wonderful discussion.
I could keep asking questions, and I'm sure you could all discuss this for another hour or two, but I'm sure everybody is tired, and we will have more data. Hopefully next year we'll sit together again and talk about the work and results from Chris and his team, and about other advancements in AI in neuromodulation. I want to thank you all for your great presentations, your input, and the wonderful discussion — for educating everybody, including me, in the audience about AI and its potential. I want to say goodnight to everybody. Stay tuned and stay curious, and let's keep pushing the frontiers of medicine and neuromodulation for better patient care. Goodnight. Goodnight, everybody.
Video Summary
The NANS Educational Committee webinar, led by Dr. Yashar Ashravi, addresses the potential and challenges of integrating AI into neuromodulation for pain management. The discussion explores how AI could transform clinical practices by identifying subpopulations within patient data using machine learning and natural language processing. Distinguished speakers include Dr. Ryan DeSouza from Mayo Clinic, who discusses ethical and technical challenges associated with AI, highlighting patient privacy, cybersecurity, and the variability of data sources. Dr. Amit Atala provides an overview of AI’s landscape in healthcare, emphasizing the role of large foundation models and their ability to be fine-tuned with domain-specific data. Dr. Stephen Voloshinek presents current evidence on AI in neuromodulation, focusing on identifying biomarkers from neural signals and using them to optimize device feedback for pain management. Dr. Christopher Robinson delves into the future, proposing that AI can predict and adapt neuromodulation treatments and even suggests potential for functional neuromodulation in spinal cord injury rehabilitation. The panel acknowledges the inevitable incorporation of AI into healthcare and stresses the need for reliable, unbiased data and collaboration among engineers, clinicians, and AI experts to advance the technology responsibly. Audience questions address the practical use of AI in medical training and spinal cord stimulation, underscoring the overall sentiment that AI, despite its current limitations, holds significant promise for revolutionizing patient care and pain management in the future.
Keywords
AI
neuromodulation
pain management
machine learning
natural language processing
patient privacy
cybersecurity
biomarkers
healthcare
spinal cord injury