NANS Webinar 2024 - Comprehensive Introduction to EEGLAB: Exploring Features, Applications, and Implementation Strategies in EEG Research - On Demand
Video Transcription
Welcome to the NANS Education Committee webinar series. We are delighted to have you join us today. I'm Ilknur Telkes, and my colleague Lisa Goodman and I will be your moderators today, facilitating our discussion and Q&A session later on. Today we have a very exciting and fundamentally important topic in research, neural signal analysis using the EEG Lab Toolbox, which is one of the most commonly used open source signal processing platforms in EEG research. Our expert speaker, Dr. Arnaud Delorme, is the project scientist and chief software architect of EEG Lab. He's a neuroscientist at the Institute of Noetic Sciences, a research director at the National Center for Scientific Research in France, and also a senior research scientist at the University of California, San Diego. Dr. Delorme's primary research interest is in the analysis and modeling of human consciousness as captured by high dimensional EEG, MEG and other imaging modalities. He also has a keen interest in the scientific study of consciousness and spirituality. He pioneered experimental brain imaging analysis approaches including mental states, mind wandering during meditation, neural correlates of conscious experience and dynamical changes underlying extraordinary states of consciousness. He has published over 160 publications. He's also the author of the book, Why Our Minds Wander: Understand the Science and Learn to Focus Your Thoughts. So today, by the end of this session, you will gain insights into the core features of EEG Lab, practical applications of EEG Lab and effective strategies for using EEG Lab in research projects. So thank you again for being here. And without further ado, I will give the stage to Dr. Delorme. Thank you for the kind introduction. So can everybody see my screen? So I'm going to start with an introduction to EEG, just in case some of you don't know what EEG is. And even if you know what EEG is, if you have a clinical perspective, it's slightly different from a fundamental research perspective. So this is what EEG looks like. This is a screen capture from one of the old papers. And this paper is actually from Hans Berger. So Hans Berger here invented the EEG more than 100 years ago, in 1924. This year was the 100th anniversary of the invention of EEG. He invented the EEG because he thought he had a telepathic experience with his sister. And he wanted to understand the brain and brain waves. And so that's why he created EEG. And that's roughly one of the first published EEG recordings, from 1926. You know, it was really invented in 1924, but, you know, the first published document is more from 1926. Then the first spectral analysis, and that's extracting the brain waves basically from the EEG. A very important invention in 1962, which is the computer of average transients. And I'll explain what it is in a second. And then in 1979, the first source localization using EEG. And then 1995, the first application of a special algorithm we call ICA, independent component analysis. And I'll come back to that too. And then in 2009, we got the first dry electrode system. So recording without gel. So the typical experiment, and this is very historical, these are called ERP experiments. So in the clinical realm, we record EEG, and maybe you're trying to find sleep spindles or epileptic spikes. But at the research level, most people are doing ERP experiments. So ERP stands for event-related potential. And what you do is that you present several stimuli here.
So you have two types, one in blue, one in red. And then you measure the average ERP for the blue and for the red. You also remove the baseline, and then you look at the difference between the blue and the red. So that's basically the realm of ERP, event-related potential, which is what a lot of the field has been doing. And so when I mentioned the first recording of averages with the computer of average transients, in 1963 I think, this was a system to study quantum mechanics. And what it does is that it was a kind of oscilloscope with memory. The first oscilloscope that had memory. And so it would take a snapshot around the stimulus of interest and then average as they came in. So in the old days, you couldn't record brainwave data. You could see it on the paper traces, but you could not record it. But with this new machine, one of the first digital machines to, you know, do an average in real time, you could already compute ERPs. And so that's why a lot of the field, you know, they got that machine, even though it was designed for quantum physics. Originally, you could just record time series, and with this people started to do ERPs. So looking at the trace for condition one, the trace for condition two, and then assessing if there is any significant difference between the two. This is a typical ERP experiment where you have a subject in front of a screen. The stimulus is at time zero. Then you compute the average. And then a lot of what the field has been doing is to look at these peaks right here. So the amplitude and the latency of these peaks, even though, as we'll see in a second, this has little physiological relevance. But that's what the field's been doing. So you have hundreds, thousands of papers interpreting the latency of the peaks and the amplitude of the peaks here. So some very well-known peaks are N1, P1, P2, N2, P3. And basically, you have a P when it's positive, and you have an N when it's negative. And for some weird reason, they put the negative on top and the positive on the bottom. And now it's changing these days. This is like the old fashion, and there is no real reason. Maybe one software decided to do it like this, and then people picked up on it. But these days, you more often represent the plus on the positive side. But you'll still see things like this. So P1 is positive at 100 milliseconds, P2 positive at 200 milliseconds, P3 positive at about 300, usually peaks more at 400. And then you have N2 negative at 200, and N1, et cetera. And so then you will have two conditions. For example, these are rare stimuli and frequent stimuli in the visual domain. And then you have variation, and then you look at the difference between the amplitude of these peaks. And then you also look at the difference between the two. Here, you know, they have a special name for the difference. Here, it's called the mismatch negativity, so MMN. And that comes from the model that the EEG data is the average plus some noise. So that's the fundamental idea behind doing ERP analysis, that there is basically no signal in the EEG until you present the stimulus. And then the data becomes the average. So the ERP, single trial ERP, plus some noise. And then the idea is that the average appears in each trial, and the background noise is not perturbed in any way by the stimulus. And that's one of the first contributions of our lab to show that that's not the case. And so to do that, we used what we call the ERP image. So you have one trial right here, and we color-coded here.
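To make the averaging model described above concrete, here is a minimal NumPy sketch of the classic ERP computation: per-trial baseline removal, averaging within each condition, and a condition difference wave. The array shapes, sampling rate, and random placeholder data are illustrative assumptions, not EEG Lab code.

```python
import numpy as np

# Illustrative shapes: single-channel epochs (n_trials, n_samples),
# sampled at 250 Hz with the stimulus at sample 50 (i.e. -200 ms to +600 ms).
fs = 250
t = np.arange(-50, 150) / fs * 1000.0           # time axis in ms (for plotting)
rng = np.random.default_rng(0)
blue = rng.normal(size=(100, 200))              # placeholder data, condition 1
red = rng.normal(size=(120, 200))               # placeholder data, condition 2

def erp(epochs, baseline=slice(0, 50)):
    """Average over trials after removing the pre-stimulus baseline per trial."""
    corrected = epochs - epochs[:, baseline].mean(axis=1, keepdims=True)
    return corrected.mean(axis=0)

erp_blue, erp_red = erp(blue), erp(red)
difference_wave = erp_red - erp_blue            # e.g. a mismatch-negativity-style contrast
```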
So you see it's blue when it's negative, red when it's positive, and then we have several. So every time it's a trial, you have a stimulus here at time zero. And then usually you would just do the average, but what we do is that we stack the different colors, and then we can also sort them. So these are the stacked different trials, and then we can sort them, for example, by reaction time. So here the black trace is the reaction time. So if we sort it by reaction time, this is how it looks. And then we can apply some smoothing across the trial space. Do you see some interesting features? So this is the ERP, so this is the average of all these trials right here. And here we see that the activity on this specific electrode, you see it's time-locked. The black trace here is the reaction time, so it's time-locked to the reaction time. It occurs after the reaction time. And then we also have some early activity here that is only present for the fast trials, not for the slow trials. So you can't see everything when you do the average. You see much more when you look at the single trials. And this is another paper that was published in our group, in which what we did is to show— so you have a stimulus right here, and then the basic ERP assumption is that basically when the stimulus comes in, it doesn't disrupt the brain activity. So what we did here, we sorted by alpha, so alpha power. Alpha is about 10 hertz. So these are ongoing brain waves. Alpha, it's always there, basically. And so we sorted by brain waves, and then we looked at the ERP. And we also took the trials where there was almost no alpha. So there's lots of trials where there's some alpha, and then by chance, some trials, there is no alpha. And we sorted in the same way, and then we looked in the ERP. And the ERP assumption is that this shouldn't change at all, the ERP. Because the ongoing brain wave here, the alpha, shouldn't affect the ERP. The ERP is the sum of the background, which is not affected by the stimulus, plus the event-related potential, the average event-related potential. And here we show this is the ERP here in blue for the lowest alpha, the ERP in brown for the highest alpha. And you can see they are very different. They should be the same, according to this ERP hypothesis. They're not. This was more than 20 years ago. But still, the field has not integrated this, that there is more to the ERP than the average. And this is what EEG Lab, so the software, and our lab is known for. It's for looking at single trials, and also methods, which is called independent component analysis, which I'm going to talk in a second. So we look in the time domain. This is a time domain. We also look in the spectral domain. So for spectral domain, here, I don't know if you see them well. You see these oscillations. We look at different frequencies. So for example, here, 4 hertz. What's the amplitude? When it's green, it means low amplitude. When it's red, it means high amplitude. And what's the amplitude with respect to the presentation of the stimulus right here? At different frequencies. For example, at 10 hertz, how does it change compared to the baseline? We see here this blue spot. It means alpha decreases here at about 600 milliseconds. So we call these plots event-related spectral perturbation. And that allows us to see the brainwave change in response to a stimulus presentation using these kind of plots. And basically, this is a diagram of the field of EEG research and ERP research. 
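A minimal sketch of the ERP-image construction just described, assuming hypothetical single-channel epochs and reaction times; the trial count, smoothing width, and variable names are illustrative, not taken from EEG Lab.

```python
import numpy as np

# Illustrative single-channel epochs (n_trials, n_samples) and per-trial reaction times.
rng = np.random.default_rng(1)
epochs = rng.normal(size=(300, 200))
reaction_times = rng.uniform(0.3, 0.9, size=300)     # seconds, hypothetical

# 1. Sort trials by reaction time (the black trace in the ERP-image plots).
order = np.argsort(reaction_times)
sorted_epochs = epochs[order]

# 2. Smooth vertically across neighbouring trials with a moving average
#    so single-trial structure becomes visible (here a 20-trial boxcar).
width = 20
kernel = np.ones(width) / width
erp_image = np.apply_along_axis(
    lambda col: np.convolve(col, kernel, mode="valid"), 0, sorted_epochs)

# erp_image can be shown with e.g. matplotlib's imshow; averaging all rows
# back together recovers the ordinary ERP trace plotted under the image.
```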
So from 1965, after the computer of average transients was created in 1963, you see ERP is rising. And then EEG, which was studied before. So all the alpha brainwaves discovered by Hans Berger in 1924, and the other ones, were studied. And it starts decreasing. And then the study of brain oscillations, so this is the study of brain oscillations, it started increasing in the 90s. And it's slowly reverting in a sense. So now more people look at single trials. But still you have a majority of people who do EEG research who are going to look at just the average. But it's more advanced than it used to be. And this is event-related spectral perturbation, and the first time ICA was applied to EEG data, which was also in our lab. And this is when EEG Lab was created, here in 2003. So another problem of EEG is to try to find out where it comes from in the brain. And so here you have one source. So when you have one source, it's relatively easy. Here you have several channels. So this is the skull. This is the scalp. These are your electrodes. And you see, because of what we call volume conduction, which is the diffusion of potential in space, most brain sources, so this is your cortex right here, most brain sources will project to multiple channels, even very far. A source that's in the frontal region is going to project onto the occipital channels, so the channels in the back of the head. That's very common. So when you have one source, it's not so much a problem because you can have a model, and then you can try to find out where it's coming from. But when you have two sources, you know, they mix at the electrode level here, and it's very difficult to disentangle them. And you usually don't have two, you have more than two. So then trying to find out where it comes from is a real problem and is one of the main reasons EEG, you know, is not as precise in terms of localization of the source compared to fMRI: you can record at the scalp, but it's very hard to know where it comes from in the brain. Not impossible, but very hard. So we'll talk briefly about some techniques in EEG Lab to do that. Usually what you would like to do is to record directly at the surface of the cortex, and that's possible. Actually, for epileptic patients, very often they're implanted, and we can record at the surface of the cortex and try to resolve them. And if you were to record at the surface of the cortex, the way you can model activity in the brain is as a small battery. So here I represented the battery, and the battery has a positive and a negative pole. So on the positive pole, we'll have a positive potential, and on the negative pole, we'll have a negative potential. So using this positive and negative potential, assuming we have only one source, we should be able to find out where it comes from in the brain. Where do these sources come from in the brain? Well, they don't come from stellate cells, because the stellate cells have an electrical potential that just cancels out. So usually there is no net potential at the surface. But by contrast, pyramidal cells, and in particular activity over the axon and dendrites, it's mostly the dendrites that generate the EEG, they generate an electric field. And then when you have multiple of them, if they're not aligned, then again it cancels out.
But if they're aligned but activated asynchronously, so the electric fields are not generated at the same moment, again, you won't see anything at the scalp. But if you have millions of cells which are aligned and activated in a synchronous fashion, then you'll see something at the surface of the scalp. And the generation is relatively complex, you know, of these currents between all the different cells. There's a lot of research, it's not a solved problem, how the EEG is generated, you know, in which cell layers the EEG is generated. All of these are still relatively open questions. So historically, you know, these two fields were, and they still are, very separate from each other. So you have researchers who work at the spike level, so individual neural discharges, and then you have researchers who work at the ERP level, so just the average. And they were not talking to each other. But, you know, part of the effort in our lab is to try to build bridges. So to reconcile basically what you observe at the scalp level compared to what's happening actually in the brain. And one reason why this is so hard is also historical. So I said the computer of average transients, you know, just gives you the average, but that's not the only issue. The other issue is that the people who started using it were psychologists. And I mean, I'm an experimental psychologist myself, there's nothing wrong with psychology, but they had behavior and reaction time. And most of the experiments in behavioral psychology were performed with reaction time. How fast can you go? Sometimes you can derive very interesting information from, for example, you go 10 milliseconds faster on this type of stimulus compared to this other one. And so EEG in those days, in the 1970s and 80s, was seen as an additional marker to reaction time. So you had reaction time and now you have N1, P1, P3. And so the issue is that the psychologists who had these measures were not interested in the brain. They were just interested in additional behavioral markers. So that's also why it's so hard for the EEG field and the ERP researchers to come back to, oh, you're actually working on brain signals. These are actual brain signals here. You can do brain imaging using this signal. You just don't get one number, like you're getting with behavior. This is something very complex that's going on here to generate this P1, N1, P2, N2. And it's worth studying it. And so that's, you know, one of the dilemmas of basically the past 30 years of EEG research. And it's slowly evolving towards, yeah, looking at this in more depth. So, again, you know, where does it come from in the brain? And so this is a standard EEG cap. Here is my former PhD student, Cedric, putting on a cap. You record some signal here. We have like 32 channels. Then what you do is that you have, for example, spectral power. You have the amplitude of some brainwave at each of the sites. So each dot here is an electrode site, and you notice some are outside the head limits, and that's just the convention. The convention is, the electrodes that are low here, if they're below the midline, they're plotted outside, so it's just easier to represent. And you can see these numbers here as a 3D surface. So this is like terrain, where you would measure elevation, and the color would represent the elevation. So same thing here, the color represents how high the number is, and using that, you know, we plot these kinds of plots, and we can plot it in 3D as well.
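As a rough illustration of the "terrain" analogy for scalp maps, the sketch below interpolates one value per electrode onto a 2D grid. The electrode coordinates and power values are made up for the example; this is not how EEG Lab's plotting functions are implemented.

```python
import numpy as np
from scipy.interpolate import griddata

# Hypothetical 2D electrode positions (projected onto the scalp plane) and one
# power value per electrode, e.g. alpha power.
rng = np.random.default_rng(8)
xy = rng.uniform(-1, 1, size=(32, 2))
alpha_power = rng.uniform(0, 5, size=32)

# Interpolate the 32 values onto a regular grid: the "terrain" whose colour
# encodes how high the number is at each scalp location.
grid_x, grid_y = np.mgrid[-1:1:100j, -1:1:100j]
topo = griddata(xy, alpha_power, (grid_x, grid_y), method="cubic")

# topo can be displayed with imshow/contourf to reproduce the 2D scalp maps,
# or draped over a head mesh for the 3D version mentioned in the talk.
```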
And here, the color represents the amplitude of some brainwaves. So for example, say here this is alpha, when it's blue, it means alpha in this region is low, and when it's red, it means alpha in this region is high. And again, you can plot it in 2D, or you can project it on the surface of the head of the subject. So this is what we observe, you know, from the previous slide, and this is what we want. We want to know what happens in the brain. And one problem is actually relatively simple. It's called the forward problem. The forward problem is, I know where the sources are in the brain, and I have a model of the brain, so it's a biophysical model, it's a 3D model with all the geometries, etc., conductance at each location, and then I can just put my sources, and, assuming the model is correct, I can generate scalp activity. The problem is when you start from some scalp activity and go back to the sources. And this is called the inverse problem, and, as I mentioned, the main problem is that there are different sources in the brain which are active at the same time. And mathematically, there is not a single solution. There are many, many, many solutions. So you need to add constraints to the model. And we'll see what kind of constraints we can add to the model. So for doing source localization, what you need is a head model. So you need these surfaces and the conductance, the resistance of these different surfaces. And usually, the brain, in the old days, was modeled as a sphere, and it actually worked really well. You said, oh, there's a sphere for the cortex, a sphere for the CSF, then a sphere for the skull, and a sphere for the scalp. So four spheres, three or four spheres. These days, we use what we call a boundary element model. So we have a 3D model, a mesh, a more complex mesh. So we have the exact geometry. Then in addition to that, we need the sensor locations. And then we need to know where the sources can be. Usually, you want to put the sources on the cortex. You don't want to put a source in white matter, for example. So you have a distribution of the sources. So this is what you get: you might have a 3D scan of your electrodes. You might have the MRI of your subject, from which you can extract the mesh model. And then you have a source model. First, in EEG Lab, you have to align them all together. And then you can do source reconstruction. You know the numbers here for each of these electrodes, and then you can try to find out where the sources are, if you have one source. Now, how do we deal with multiple sources? So I already mentioned the problem is ill-posed. So one thing you can do, you can say, oh, I only have one source. And my source is like a dipole. That's a single battery inside the brain. So I'm just going to say, all the activity comes from a single battery inside the brain. Or you can also have distributed source models, where you say, well, it can come from multiple sites and multiple parts of the brain, but it needs to be smooth. And all the active areas need to be contiguous. So that's the LORETA model. Maybe some of you heard of that. It's low resolution source localization. So you need to add some constraints to these models. And an additional constraint we add ourselves is ICA decomposition. So why do we add ICA decomposition? Because it solves the first half of the inverse problem. In a sense, it's going to separate the different sources.
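A toy sketch of the forward and inverse problems under the linear model implied above: scalp data as a lead field times sources, and a regularized minimum-norm estimate standing in for the smoother, LORETA-style constrained solutions mentioned in the talk. The lead field is random here; a real one comes from the head geometry and conductances discussed above.

```python
import numpy as np

# Hypothetical dimensions: 64 scalp channels, 500 candidate cortical sources.
rng = np.random.default_rng(2)
n_chan, n_src = 64, 500
L = rng.normal(size=(n_chan, n_src))        # lead field / head model (forward model)

# Forward problem: given sources s, scalp data is x = L @ s (+ noise). Easy.
s_true = np.zeros(n_src)
s_true[123] = 1.0                           # one active "battery"
x = L @ s_true + 0.05 * rng.normal(size=n_chan)

# Inverse problem: many s explain the same x, so a constraint is needed.
# Minimum-norm estimate with Tikhonov regularization (illustrative lambda):
lam = 1e-2
s_hat = L.T @ np.linalg.solve(L @ L.T + lam * np.eye(n_chan), x)
```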
So if you have two sources which are active at the same time, this algorithm will separate the sources. So let's say this is a recording, and you have like frontal channels. So three frontal channels here, and then six occipital channels, and it's just some EEG. You apply ICA, and this is what you would get. You would get, so this is a one ICA scalp topography, and it has extracted all the blinks. So we see whenever the person blinks, for example. So this is the EOG. And then it has extracted also alpha burst in the occipital region. It has extracted frontal midline theta. So that's theta activity in frontal regions. And the way this algorithm works, it's a purely statistical algorithm. So it's called blind source separation. So it separates the source based on their statistical independence. And there's conferences about using ICA for EEG. It's also used for fMRI. It's used for a lot of different things. It works really well. And EEG is very well adapted to this kind of signal. We pioneered the use of ICA to find sources. Right now, if you ask an EEG researcher, all of them use ICA to remove artifacts. For example, blinks. Not all of them will use ICA to study the brain. So there's still some people who are not convinced. It will also isolate EMG signals. So you see, this is raw scalp topographies. So here, it's pretty hard to see where you're going to put your battery. Because there are many batteries inside the brain you have to position precisely. And you have many solutions. Versus you try to do source localization on this. It's pretty obvious. You know, it's like, oh, we're going to put a battery here. You know, and the negative is going to point upward. And we're going to generate this scalp topography. So we do source localization on this component. And that's why we say it solves the first part of the inverse problem. Because you don't have to see how you can combine the different sources. Now you have your ICA components and you just do one source per ICA component. This is an example. So, you know, one ICA component. Here's some parietal activity. This is where the dipole, you know, would be located. So this is where the source will be located. And then here is another one. And then we can also place them, you know, inside a 3D head model. So in the few minutes I have left, I'm just going to show you an overview of EEG Lab. So EEG Lab, that's a public software financed by NIH. And we have about 17,000 people on our mailing list. And the software has been downloaded about a million times since it came out in 2003. And which was a long time ago. And we are, there is 143 plugins to EEG Lab. So plugins are user contributed programs that just plug in into EEG Lab. So any users of EEG Lab can use them. And these are just some statistics. These are kind of older one up to 2002. This is a recent one here. This paper was published in 2024. This, you see the number of citations for each of the software. So EEG Lab, even though it was created more than 20 years ago, is still the most used software for processing brainwave. Brainwave data. And there's many reasons for that. One of the main reasons is that you can do group analysis. So I'll show that in a second. So you can process multiple subjects at the same time. This is a plot comparing the different software, both commercial and public. And EEG Lab has, you know, more features. And we also have a webpage comparing, it's comparing the use of EEG Lab for, compared to commercial software. 
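A minimal blind-source-separation sketch in the spirit of what is described here. EEG Lab's default ICA is Infomax; scikit-learn's FastICA is used below only because it is readily available, and the data, component index, and shapes are placeholders.

```python
import numpy as np
from sklearn.decomposition import FastICA

# Hypothetical continuous EEG: (n_channels, n_samples).
rng = np.random.default_rng(3)
eeg = rng.normal(size=(32, 10_000))

ica = FastICA(n_components=32, random_state=0)
sources = ica.fit_transform(eeg.T).T        # (n_components, n_samples) activations
mixing = ica.mixing_                        # (n_channels, n_components) scalp maps

# Each column of `mixing` is one component's scalp topography (e.g. a blink map).
# Zeroing a component and re-mixing removes that source from the channel data:
sources_clean = sources.copy()
sources_clean[0] = 0.0                      # suppose component 0 captured blinks
eeg_clean = ica.inverse_transform(sources_clean.T).T
```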
So there are advantages and disadvantages to using open source software compared to commercial software. This is what we call the plugin manager. So that's where the 143 plugins are. And then here you can see which plugins have been the most downloaded, and the ratings, they each have a rating. So you can select specific plugins. And you can create your own plugins as well. So this is the standard way that EEG Lab is processing data. So we start from the raw data. So that's the data that comes out of the amplifier after an experiment. We import the data in EEG Lab and import the channels. And believe it or not, that's almost 50% of the work. Because everybody has different amplifiers. Everybody has different channel locations. You may have scanned your channel locations, or if you have a phone, like with a LiDAR sensor right now, you can scan the electrodes in 3D. Importing the data is not a trivial step. So first you import the data, then you remove unwanted channels. For example, a lot of amplifiers will collect channels you don't need. Like they all think you have extra sensors, but you don't. So you have tons of additional channels you have to remove. Then you high pass filter the data, and you re-reference the data. So high pass filtering removes all the slow trends. Re-referencing allows you to change the reference. So you always record your EEG data with respect to your reference. And you're allowed to re-reference offline, meaning use a different reference. And that's useful because in the literature, sometimes they use different references. Then you clean your data right here. And we have automated methods to do that. I'll talk about it in a second. And then you run ICA, and we also have an automated method that's based on AI machine learning to detect the bad components automatically for you. So all of this is a pipeline. And it's always good to, you know, do it by hand. But once you've done it by hand a couple of times, this can be automated. This is the way, for example, we detect bad channels. So this is good data and bad data. So bad data on the left, good data on the right. You see the difference. These are channels here on both axes. And this is the correlation of the activity of channels with themselves and with all the other channels. And because of volume conduction, if you take any two channels, they are very correlated, up to 0.8. So all channels are very correlated with each other. And basically where you have blue lines here, it means you have a channel that's not correlated with its neighbors. Here you see it's smooth. So all the channels are correlated with their neighbors. So here we don't have any bad channels. And here we can detect the bad channels. And of all the methods to detect bad channels, this seems to be the best one. And then we also have another method to detect bad portions of data based on statistics. Again, that's a relatively complex method. I'm not going to enter into the details, but it works really well. We have a demo where people are jumping and then you can remove all the artifacts.
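The early cleaning steps described above (high-pass filtering, offline re-referencing, and correlation-based bad-channel detection) can be sketched roughly as follows. This is a crude stand-in for the automated methods mentioned in the talk, with made-up data and thresholds.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

# Hypothetical raw recording: (n_channels, n_samples) at 250 Hz.
rng = np.random.default_rng(4)
fs = 250
eeg = rng.normal(size=(32, 60 * fs))

# 1. High-pass filter to remove slow drifts (0.5 Hz cutoff, zero-phase).
sos = butter(4, 0.5, btype="highpass", fs=fs, output="sos")
eeg = sosfiltfilt(sos, eeg, axis=1)

# 2. Re-reference offline to the average of all channels.
eeg = eeg - eeg.mean(axis=0, keepdims=True)

# 3. Flag bad channels: because of volume conduction every channel should
#    correlate strongly with the others; channels that do not are suspect.
corr = np.corrcoef(eeg)
np.fill_diagonal(corr, np.nan)
max_corr = np.nanmax(np.abs(corr), axis=1)
bad_channels = np.where(max_corr < 0.4)[0]   # threshold is illustrative
```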
Then we have a method to remove bad ICA components, like the blinks and the temporal muscles, which are very prominent in EEG data. So these two components we can remove. And to do that, we had people, ICA experts, just looking at the components and classifying them: I think this is a brain component, I think this is an eye component, I think this is a muscle component. So they classified a total of 15,000 components. There were about a hundred people. And we used this to create a machine learning algorithm to do like the humans do, basically. We told the neural network, these are the labels that correspond to your components. Now you have new components, try to find the labels for them. So we just trained a machine learning neural network to do that. And it works really well. You know, the performance on unseen data was about 95%. Once you've cleaned your data, so we're done with all these steps, you can extract data epochs, whether you want to compute ERPs or whether you want to compute the spectral perturbations. In EEG Lab, you can plot the ERPs. You can plot also the 2D scalp topographies and in 3D as well. You can plot which ICA components contribute to the ERPs to see which ones you want to study, if you want to study ICA components. If you don't have epochs, if you just have raw signal, like eyes open, eyes closed. So two conditions, no events, just 30 seconds of eyes open, 30 seconds of eyes closed. You can also compare the spectrum of these two conditions, which is shown here. And then you can plot the ERP image. So those are the plots I showed you at the beginning, sorting by all kinds of information. And also plot the ERSP, which I mentioned at the beginning. So the variation of spectral power compared to baseline. And perform source localization, either using dipoles or using distributed models like LORETA. Once you're done with that, the last step is to do group analysis. And to do group analysis, we really focus on BIDS. So BIDS is the Brain Imaging Data Structure. And then you create your study design, you pre-compute measures, and then you have two different types of statistics you can apply. So BIDS, I'm going to talk for a few minutes about BIDS. BIDS, Brain Imaging Data Structure. That's the way all the pipelines in EEG Lab and other software, most of them these days, are structured, around BIDS. So BIDS is a way to organize your data. It's not a database, it's just a way to organize your data in folders with special text files which have the metadata. And the websites on which the data are stored are OpenNeuro and NEMAR. So these are public repositories where you have lots of BIDS data. On OpenNeuro right now, you have more than 400 experiments which are stored. So originally, BIDS was created for fMRI. And then we adapted it here with some colleagues for EEG. Because the metadata is slightly different. So in a lot of hospitals or research labs, they will systematically organize all the data according to BIDS. And that simplifies things when you apply pipelines for processing data. So I'm just going to show you, in the few minutes I have left, how to organize your data into BIDS. So first, you have a data description. So here it's not .txt, it's .json, which is a kind of file that's a text file, but it's a little bit structured. But it's a text file. So you can open it with a text editor and you can see, okay, name, that's the meditation study. Reference and link, that's the reference and link. The license, the BIDS version. So this is the first file. Then next, you have a readme file. The readme file just contains plain text. Then you have a participants file. I only have two participants here. That's another text file, like an Excel file, where you have the names of the participants. Then you have another description file. And then here you come to the EEG files. So you see, for subject one, you have an EEG folder. And all these file names right here, they're very standard. If you change one character here, then it's no longer BIDS. And then if you try to upload your file, you're getting an error. You're no longer BIDS. So here, for example, for EEG, you have a channels file. So it contains the names of the channels, the type of the channels, the units. Then you have the raw data files: binary files in a specific format. There are four formats possible.
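A minimal sketch of the BIDS layout just walked through, written as a small Python script. The field values, subject details, and task name are invented for illustration; a real conversion should go through the EEG Lab BIDS export plugin mentioned below rather than hand-written files.

```python
import json
from pathlib import Path

# Minimal BIDS-EEG skeleton for a hypothetical "meditation study" dataset.
root = Path("meditation_study_bids")
(root / "sub-01" / "eeg").mkdir(parents=True, exist_ok=True)

# dataset_description.json: a small structured text file, readable in any editor.
(root / "dataset_description.json").write_text(json.dumps({
    "Name": "Meditation study",
    "BIDSVersion": "1.8.0",
    "License": "CC0",
}, indent=2))

# README: plain text description of the experiment.
(root / "README").write_text("Plain-text description of the experiment.\n")

# participants.tsv: one row per participant, like a small spreadsheet.
(root / "participants.tsv").write_text(
    "participant_id\tage\tsex\n"
    "sub-01\t34\tF\n"
    "sub-02\t29\tM\n")

# Per-subject channel metadata; the raw recording itself must sit next to it
# in one of the accepted formats (EDF, BrainVision, EEGLAB .set, or BDF).
(root / "sub-01" / "eeg" / "sub-01_task-meditation_channels.tsv").write_text(
    "name\ttype\tunits\n"
    "Fz\tEEG\tuV\n"
    "Cz\tEEG\tuV\n")
```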
What we've done is we created this plugin in EEG Lab. So you give it a bunch of raw data files, it will automatically convert them to BIDS. It will ask you a couple of questions, you know, how your data was acquired, and it will convert it to BIDS. And then you can post it online on OpenNeuro. And then OpenNeuro and NEMAR, these are two websites which are financed by NIH and partnered with each other. We maintain NEMAR, which is just for EEG. And OpenNeuro contains everything. EEG, fMRI, PET, etc. So we're a partner. We're the EEG face of OpenNeuro. And on NEMAR, we have access to supercomputing resources. So if you go on nemar.org, you can actually process the data that's on NEMAR. You can download it to your workstation to process, or you can also use the supercomputer resources, because NEMAR is hosted at the San Diego Supercomputer Center. And we have scripts and documentation that automatically process all the subjects in a given experiment. This is some example of group analysis. So you can do grand average ERPs, you can compute statistics, you can do source localization of multiple ICA components. You can do GLMs, general linear models, which is the same way you would process the data in SPM. Here we're using the LIMO extension of EEG Lab. And yeah, so I apologize if this last section was a little fast. We have a bunch of YouTube videos, actually our videos have been viewed almost a million times, all of them combined. So we have a very popular channel for EEG processing. So I invite you to go to the channel if you want more information. And these are some of the reference papers. But thank you for your attention. Thanks so much, Dr. Delorme, for sharing such valuable insights from the EEG field with us today. We truly appreciate your expertise in the field. Lisa, would you like to open the Q&A session? Yeah, of course. Thank you. So, excellent talk. I really appreciated the background on EEG and then the more practical tools and how to embed it in the system. Maybe a first question related to the source analysis that you explained. Like, I think in our field, we are mainly confronted with the inverse problem, since we don't know yet which brain areas are really important, especially lately with a lot of new stimulation paradigms that are coming up, we are a little bit in doubt, like, which brain areas are important. Would you recommend that we first start, for example, with fMRI imaging, then determine which brain areas are important and then switch to EEG for source analysis? Or should we feel confident enough, let's say, to really start with EEG and be quite confident that our sources will be the correct ones? Yeah, so that's a very loaded question because it's actually very hard to answer. The approach which consists in doing fMRI and then saying, oh, this is where the sources are, has been shown to be wrong, because the EEG and the fMRI don't record the same signal. So fMRI is blood flow, EEG is dendritic activity.
So you can have, yeah, it could be different locations and it usually is. So fMRI has been correlated with gamma frequency in the EEG. But if you have other frequencies, yeah, it won't give you any information. So if you're looking, for example, for frontal midline beta, you won't see it at all in the fMRI signal. You only see, you know, gamma. And then gamma, you almost cannot record at the scalp. You know, it's debatable whether you can really get gamma frequencies at the scalp. So gamma frequencies are the high frequencies above 30 Hertz. And then the problem is that the skull act as a, you know, filtering machine. And then you don't see any of the high frequencies. So it's really to get, you know, gamma activity, you really need to have intracranial electrodes. You know, sometimes you can record gamma frequencies at the scalp, but, you know, it's like the quality of the signal is very low. So, yeah, the first answer is you can't really use fMRI to use a seed to find your EEG signal. And that's, these were actual, you know, people, something people did in, you know, about 15 years ago. It was like, oh, you know, let's just do fMRI. And then we're going to put our dipole there. And then now we know where they are. We're just going to fit them. It just doesn't work. They are, you know, things that work, but they're very advanced, much more advanced than what I showed you here. So then the other question is, I think it all depends on the, you know, the quality of your signal and what you're interested in. You know, if it's early sensory, perfectly fine to do EEG, very robust. You know, if it's a language like, you know, for example, there's in the ERP, something called the N400, it depends on the context, congruent versus non-congruent. You know, this kind of brain activity is much less robust than just, you know, you seeing an image, for example, this kind of congruent. So it's very hard to study and will be much harder to localize because there's a lot of things going on. You know, when it's early sensory, whether it's auditory or visual or even somatosensory, it's fairly easy to analyze and localize. But, you know, when it's higher level, when, you know, the longer you wait after the stimulus presentation, the higher, the harder it is to localize. So yeah, the response depends on, you know, what you're interested in studying. I would start, you know, with something easy and then go hard rather than the other way around, but it's possible, you know, totally possible to get, you know, source localization. They won't be precise, you know, like with, also one thing to remember is EEG is a more noisy signal than fMRI. So even though if you, you know, if you look at the lit ERP literature, it'll tell you, oh yeah, I just need 30 stimuli or, you know, between 15 and 30 stimuli to get a, you know, reliable ERP, you know, which is probably true, but you get much better signal noise ratio if you have 200, 400, you know, stimuli. And it's worth it because yeah, your stats are another level. You know, you have like, you know, lots of the ERP papers like, oh, I'm, you know, significant at P0.05, so it's good enough. But, you know, you can do real neuroscience if you have several hundreds of stimuli and then, yeah, then, you know, your significance and effect size will be 10 to the minus five, 10 to minus six. So there's no doubt. Yeah. Yeah, thank you very much for pointing out those details. I think they're really useful. People want to start doing source analysis. 
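The point about several hundred stimuli follows from the simple averaging model discussed earlier: if each trial is the ERP plus independent zero-mean noise of variance sigma squared, the noise variance of the N-trial average shrinks as 1/N. In LaTeX form:

```latex
\[
\bar{x}_N(t) = \frac{1}{N}\sum_{k=1}^{N} x_k(t),
\qquad
\operatorname{Var}\!\left[\bar{x}_N(t) - \mathrm{ERP}(t)\right] = \frac{\sigma^2}{N},
\qquad
\mathrm{SNR}(N) \propto \sqrt{N}.
\]
```

Going from 30 to 300 trials therefore buys roughly a factor of the square root of 10, about 3.2, in signal-to-noise ratio, which is why the statistics tighten so much with more stimuli.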
There was a question in the chat more specifically related to the source analysis, whether there are like specific restrictions. For example, can you do source analysis of the posterior insula, medial operculum? Are there like specific regions that cannot be targeted, if I may say it like that, for example, also the brainstem, with scalp electrodes? Can you share some information about that? Yeah, it's generally accepted that for all the deep structures, thalamus, hippocampus, you won't be able to get the EEG. First, they're deep. And so, you know, the EEG decreases, like, with the distance to the power of four. So that's one thing. Better than the MEG. MEG decreases, you know, with the distance to the power of six. But the main problem is not so much the distance as the organization of, you know, these structures. So these structures are, you know, organized like a kernel, like a ball. So whenever you have activity that's spread in all directions, the signal is going to cancel out. So you can't see it on the scalp. So it's signal that's very hard to see on the scalp. And so, yeah, I wouldn't target these regions. Cortex, much easier. And yeah, it's going to be hard to be credible. You know, you need to do a lot of checks if you want to say, here, this EEG comes from the thalamus. You know, nobody will believe you. It's like, yeah, unless you record simultaneously from the thalamus, you know, like with depth electrodes, and can show, oh yeah, you have a strong correlation, but otherwise people won't believe you. Great. I have a question more related to noise. You know, in the neuromodulation field, more specifically like deep brain stimulation, spinal cord stimulation or cortical stimulation, there are always stimulation artifacts. And ICA may or may not be helpful. Is there any like automated algorithm in EEG Lab already that we can use to denoise these signals or, you know, where are we with that? So you mean the noise coming from the stimulation? Electrical stimulation. Yeah, so that's similar to the kind of artifact you can get when, sometimes, you do recordings of the EEG in the scanner. So we record simultaneously EEG and fMRI. And when you do that, the trick to removing the fMRI artifacts, they're all tricks, but, you know, the ones which have come out to work are to record the EEG at 10 kilohertz. You record your EEG at 10 kilohertz. And then if you have the exact latency of the pulse, you just take the average, you remove it. It works really well. That's the trick. So probably, you know, it's something similar, you know, that would work in your case. Even though in your case, you're also stimulating the neural tissues. So, you know, you might want to use that as an onset. Okay, you know, this is when you're stimulating and that's the, you know, that's the neural activity that's recorded after that. So, I mean, if you remove it, you're probably removing some neural activity. So you might not want to remove it, but yeah, ICA could help, you know, ICA could help isolate, okay, this signal here due to this stimulation is maximally independent of these other signals which have nothing to do with it. And then, so it would be, it could help, but you'll have, yeah, you'll have to see and do some testing. Thank you.
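The 10 kHz trick described above amounts to template subtraction: epoch the data around each known pulse, average to estimate the artifact waveform, and subtract it at every pulse latency. A minimal NumPy sketch with made-up pulse timings and window length:

```python
import numpy as np

# Hypothetical high-rate recording of one channel (10 kHz) with known pulse onsets.
fs = 10_000
rng = np.random.default_rng(6)
signal = rng.normal(size=60 * fs)
pulse_onsets = np.arange(fs, 59 * fs, fs // 2)       # assumed stimulus latencies (samples)
win = int(0.02 * fs)                                 # 20 ms window around each pulse

# 1. Epoch the data around every pulse and average to get the artifact template.
epochs = np.stack([signal[p:p + win] for p in pulse_onsets])
template = epochs.mean(axis=0)

# 2. Subtract the template at every pulse latency (template subtraction,
#    as used for gradient artifacts in simultaneous EEG-fMRI).
cleaned = signal.copy()
for p in pulse_onsets:
    cleaned[p:p + win] -= template
```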
Maybe picking up on the question of Dr. Telkes, like if you still have some stimulation artifacts, for example, from spinal cord stimulation after you did the ICA, are there like specific additional filters that you can apply, or would you recommend adding additional filters after an ICA analysis? Or would you first filter them out, the potential stimulation artifacts? And I know if you're interested in the stimulation, then maybe not, but if it's a resting state study, for example, and you don't want to do ERPs, are there any guidelines in that case? I mean, I have my own guidelines. If I was processing that kind of data, yeah, I would remove the portions of data, you know, assuming you have enough data, I would just remove the portions of data, because then you have no doubt, you just remove them. And, you know, you don't remove one second, you remove like five seconds after the stimulus. This way, you know there's nothing left. Yeah, so that's what I would do, because, you know, you can use ICA and say, oh, my ICA captures it really well, but you never know. Yeah, you know, maybe there's some residual. Sometimes you don't have the choice, but if you have the choice and have data, it's just better to remove, you know, the portions of data which are affected. Yeah, sorry. I may be the one with persistent questions, but if it's a continuous stimulation, because that's what often happens in our field, that patients get continuous stimulation, like for months, years, and we do want to do an EEG experiment. In that case, how would you handle it if there is a stimulation artifact in the data? So you mean it's continuous, it's like Parkinson's patients, and how do you, what's the frequency? Like 10 Hertz or something, like continuous? It can depend. It can be 60 Hertz, 10,000 Hertz. There are like different, yeah, programming paradigms. So it can be really dependent. It's usually in bursts, right? Or is it, yeah, it's- Yeah, continuous or burst. We see that we use both of them. Yeah, if it's continuous, yeah, then you want to, you know, ICA would work well for that because it's a spatial, it should be spatially constant. ICA has problems when you have, like, with ICA, you can't do what's called the ballistocardiogram, which means, like, when you are in the fMRI scanner and you record EEG simultaneously, you know, just the heart, yeah, your heartbeat, you know, which is, you know, an electrical potential, and the way it propagates and interferes with the fMRI magnet introduces an artifact. ICA can't do that because it varies. You know, it's like a changing pattern. ICA can do spatial patterns. It can extract spatial patterns, at least the standard ICA. And in your case, the electrode is not going to move. So, you know, it's going to be a static pattern. So it should work well to, you know, either isolate it and process it separately or to remove it. Thanks a lot. I think that's really helpful for a lot of us. Yeah, and also, you know, because these electrodes which are implanted can sometimes also record activity. So that's what I would, you know, advise the field to do. Record, stimulate and record. And this way you can record it with your scalp signal as well. I fully agree with the proposal. Ilknur, I don't know if you still have a question. I don't see any questions in the chat. So if anyone still has a question, please feel free. I see two questions in the chat. The first question is, thalamocortical dysrhythmia has been implicated in various brain states, especially in chronic pain.
Oh, I was reading the question. Hang on. Do you believe that phenomenon? I believe that was the question. I mean, I would have to see the paper. If it's recorded with EEG, you know, I would be skeptical, but, you know, it's like, if they did, you know, proper due diligence, and, you know, sometimes it's just implanted electrodes. Yeah, I mean, you know, if you're using implanted electrodes, sure. If you're using EEG and you say it's coming from the thalamus, yeah, I would need to, you know, see how you came to that conclusion. I believe it initiated with, you know, preclinical studies, animal studies, and then, using EEG and source localization, they show or theorize that it's thalamocortical dysrhythmia. And in the animal studies, it's always implanted. Electrodes are inside the brain. Yeah, so, and then, you know, they check. Yeah, you know, as you say, once the animal, once the experiment is done, you dissect the brain. And, you know, you obviously see the exact position of the electrodes. But yeah, it's different from, you know, recording at the scalp. I believe we have one more question. All right, the question is, some recent analysis utilizing current source methods has suggested subcortical activity, such as in the hippocampus, using 65 EEG channels. Can we rely on surface EEG to accurately reflect subcortical activity? Um, you know, my intuitive answer is no. But, you know, I would have to see the paper. You know, it's like, reading the question, you know, it says using current source density. And so current source density is a method where you take the potential and you apply a mathematical function, which is called the Laplacian, to it. And then you get currents instead of potentials. But that's very much a surface measure. You know, it's actually the best you can do if you can't do source localization. The problem is, you know, as I said, you have activity over the frontal electrodes. It could come from the occipital cortex. You're not sure. But if you do current source density, then that far-away contribution will cancel out. So you will know where it comes from, you know. So if you have current source density, then you know it's coming right from, you know, basically underneath the skull. So this is a good method to, you know, do some basic source localization. With the potential, you're not sure, you know, where it's coming from in the brain. With current source density, you know it's coming from below the electrode. So the thalamus is too far. So if you use current source density, you know, the hypothalamus is like way too deep to measure with current source density. You would look at the potential, but you would, you know, use advanced methods to localize, if you can, in the thalamus, hypothalamus. But then the models over there, you know, they're very imprecise. But I don't know, you know, I would have to see the paper. Yeah, thank you very much for sharing it. It will be useful. Maybe just to ask kind of a general question to end the discussion: which strategies would you recommend for researchers aiming to integrate EEG Lab effectively into their EEG projects if they are starting something? Do you have any useful tips to start it up? And how can they ensure that they are leveraging the software's capabilities to their fullest potential, or maybe to be in line with the most recent developments? Well, the EEG Lab website, you know, has the most recent developments.
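Returning for a moment to the current-source-density answer above, a very rough sketch of the idea: re-express each electrode relative to its neighbours (a Hjorth-style nearest-neighbour Laplacian; the actual CSD tools use smoother spherical-spline Laplacians). The data and neighbour lists below are placeholders.

```python
import numpy as np

# Hypothetical channel-by-sample EEG and hypothetical neighbour lists per channel index.
rng = np.random.default_rng(7)
eeg = rng.normal(size=(32, 1000))            # (n_channels, n_samples)
neighbours = {0: [1, 2, 3], 1: [0, 2, 4]}    # one entry per channel in practice

# Hjorth-style approximation: channel potential minus the mean of its neighbours.
csd = eeg.copy()
for ch, nbrs in neighbours.items():
    csd[ch] = eeg[ch] - eeg[nbrs].mean(axis=0)

# A focal csd value implies a generator close to that electrode, which is why
# deep structures essentially vanish in CSD maps.
```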
There is also, you know, this paper where, you know, I review the most, I compare like four different softwares. Oops, I had to put it in the chat. Where's the chat? Everyone. Oh, okay, well, you can repost it to the users. So yeah, so, you know, look at our YouTube channel. The method hasn't changed that much, and it's always good to process the data, you know, yourself to see how it looks. And then, you know, in this paper, I also show, well, you know, it's like in standard experiment, which are lab condition, you know, EEG is better left alone. So you don't have to be like Superman to clean your data, because from the statistical perspective, if you clean your data, you're gonna remove some data, because, you know, usually remove the bad portions of data. And as you remove data, you decrease the statistical power. And actually this paper shows that it's not worth it. You know, you just high pass filter your data, and you're good to go. In most lab cases, not if you have a lot of artifact, not if you have patients, you know, which are more difficult to handle, you know, so just regular lab conditions. So yeah, that's reassuring in a way, you know, you don't need like, it's not like you're gonna bring in a researcher that's gonna clean magically the data. If you don't do any cleaning, just high pass filtering, you can get, you know, most likely results, which are superior, than if you don't process your data at all. Thank you very much also for sharing the resources. I believe that we can end the discussion with these couple of questions. We were really honored to have you in this webinar, and we hope that it's useful for everyone, and we definitely appreciate all the inputs. So thank you very much. Thanks so much. Bye-bye.
Video Summary
The NANS Education Committee webinar explored neural signal analysis using the EEG Lab Toolbox, emphasizing its significance in EEG research. Ilknur Telkes and Lisa Goodman, the moderators, introduced Dr. Arnaud Delorme, a neuroscientist and chief architect of EEG Lab, highlighting his research in human consciousness and his contributions to EEG analysis. Dr. Delorme provided a historical overview of EEG, tracing its evolution from Hans Berger's invention in 1924 to modern developments, including the introduction of independent component analysis (ICA) and the EEG Lab software in 2003.

Dr. Delorme explained EEG's clinical and research applications, focusing on event-related potentials (ERP) and spectral analysis to understand brainwave changes. He addressed the inverse problem in EEG, which involves deducing the origins of brain activity from scalp recordings, emphasizing the role of ICA in separating EEG sources more clearly.

EEG Lab's features were discussed in detail, showing its capabilities for data import, cleaning, ICA application, and source localization, supported by various plugins and extensions. The software's primary strengths lie in its open-source nature and its ability to perform group analyses, evidenced by its structured data management using the Brain Imaging Data Structure (BIDS).

During the Q&A, challenges around EEG accuracy, especially in locating deep brain structures, and strategies for dealing with stimulation artifacts were discussed, reflecting on the complexities of integrating EEG with other modalities like fMRI. The session concluded with insights on starting EEG projects, emphasizing the importance of proper data handling and statistical power without extensive cleaning.
Keywords
EEG Lab Toolbox
neural signal analysis
Dr. Arnaud Delorme
independent component analysis
event-related potentials
spectral analysis
source localization
open-source software
Brain Imaging Data Structure
EEG research