
Add comments and narration to tutorials

The tutorials lacked comments and narration, which could make them
difficult for some users to follow. Narration has been added to make
them more user-friendly.
girish, 4 months ago
Commit 02d421b6f7

+ 93 - 14
tutorials/intro/exploring-with-almirah.md

@@ -1,27 +1,45 @@
 # Exploring the CALM Brain Resource with almirah
 
-## Load the dataset
+This tutorial will guide you through the process of exploring the CALM
+Brain Resource using the `almirah` Python library. We'll cover how to
+load the dataset, query layouts, and databases, and generate
+summaries.
 
+## Loading the Dataset
+
+First, we'll import the necessary library and load the dataset.
 
 ```python
 from almirah import Dataset
 ```
+
+To see the available datasets, we can use the `options` method:
+
 ```python
 Dataset.options()
 ```
 
+This should output:
+
     [<Dataset name: 'calm-brain'>]
-    
+
+Next, we load the CALM Brain dataset:
+
 ```python
 ds = Dataset(name="calm-brain")
 ds.components
 ```
 
+The components of the dataset are:
+
     [<Layout root: '/path/to/data'>,
      <Layout root: '/path/to/genome'>,
-     <Database url: 'request:calm-brain@https://www.calm-brain.ncbs.res.in/db-request/'>]
+     <Database url: 'request:calm-brain@https://calm-brain.ncbs.res.in/db-request/'>]
+
+## Querying Layouts
 
-## Quering layouts
+Layouts represent the organized, on-disk file structure of the
+dataset. Let's start by querying a layout:
 
 ```python
 lay = ds.components[0]
@@ -29,10 +47,13 @@ print(lay)
 len(lay.files)
 ```
 
-    <Layout root: '/path/to/data'>
+This should print the layout root and the number of files:
 
+    <Layout root: '/path/to/data'>
     42652
 
+Next, we'll explore the tags available for querying the layout:
+
 ```python
 from almirah import Tag
 
@@ -40,13 +61,19 @@ tags = Tag.options()
 len(tags)
 ```
 
+This returns the total number of tags:
+
     1589
-    
+
+We can also view the possible tag names:
+
 ```python
 tags_names_possible = {tag.name for tag in tags}
 tags_names_possible
 ```
 
+Which outputs:
+
     {'acquisition',
      'datatype',
      'direction',
@@ -59,10 +86,14 @@ tags_names_possible
      'suffix',
      'task'}
 
+Let's look at the options for a specific tag, such as `datatype`:
+
 ```python
 Tag.options(name="datatype")
 ```
 
+This returns:
+
     [<Tag datatype: 'anat'>,
      <Tag datatype: 'dwi'>,
      <Tag datatype: 'eeg'>,
@@ -72,51 +103,78 @@ Tag.options(name="datatype")
      <Tag datatype: 'genome'>,
      <Tag datatype: 'nirs'>]
 
+Now, let's query the layout for files of a specific datatype, such as EEG:
+
 ```python
 files = lay.query(datatype="eeg")
 len(files)
 ```
 
+This should give us the number of EEG files:
+
     15821
 
+We can inspect one of these files:
+
 ```python
 file = files[0]
 file.rel_path
 ```
 
+This prints the relative path of the file:
+
     'sub-D0828/ses-101/eeg/sub-D0828_ses-101_task-auditoryPCP_run-01_events.json'
 
+And the tags associated with the file:
+
 ```python
 file.tags
 ```
 
-    {'datatype': 'eeg', 'extension': '.json', 'run': '01', 'session': '101', 'subject': 'D0828', 'suffix': 'events', 'task': 'auditoryPCP'}
+Which returns:
+
+    {'datatype': 'eeg', 'extension': '.json', 'run': '01', 'session': '101',
+    'subject': 'D0828', 'suffix': 'events', 'task': 'auditoryPCP'}
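
As an aside, these tags are encoded directly in the BIDS-style
filename. The following is a minimal, illustrative parser (not part of
`almirah`; the function name `parse_bids_entities` is hypothetical)
that recovers key-value entities from such a path:

```python
def parse_bids_entities(rel_path):
    """Parse key-value entities and the suffix from a BIDS-style filename.

    Illustrative sketch only -- almirah derives its tags internally.
    Note that BIDS keys 'sub'/'ses' correspond to almirah's
    'subject'/'session', and 'datatype' comes from the directory,
    not the filename.
    """
    filename = rel_path.split("/")[-1]
    stem, _, extension = filename.partition(".")
    parts = stem.split("_")
    # Chunks like 'task-auditoryPCP' split into key-value pairs.
    tags = dict(part.split("-", 1) for part in parts if "-" in part)
    tags["suffix"] = parts[-1]  # the last chunk has no key, e.g. 'events'
    tags["extension"] = "." + extension
    return tags

tags = parse_bids_entities(
    "sub-D0828/ses-101/eeg/sub-D0828_ses-101_task-auditoryPCP_run-01_events.json"
)
print(tags["task"])  # auditoryPCP
```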
 
-## Querying databases
+## Querying Databases
+
+Next, we query the databases associated with the dataset:
 
 ```python
 db = ds.components[2]
 db
 ```
 
-    <Database url: 'request:calm-brain@https://www.calm-brain.ncbs.res.in/db-request/'>
+This outputs the database information:
+
+    <Database url: 'request:calm-brain@https://calm-brain.ncbs.res.in/db-request/'>
+
+We connect to the database using credentials:
 
 ```python
 db.connect("username", "password")
+```
+
+Now, let's query a specific table, such as `presenting_disorders`, and
+display some of the data:
+
+```python
 df = db.query(table="presenting_disorders")
 df[["subject", "session", "addiction"]].head()
 ```
 
+This displays the first few rows of the queried table in a DataFrame format:
+
 <div>
 <style scoped>
     .dataframe tbody tr th:only-of-type {
-        vertical-align: middle;
+        vertical-align: middle;
     }
     .dataframe tbody tr th {
-        vertical-align: top;
+        vertical-align: top;
     }
     .dataframe thead th {
-        text-align: right;
+        text-align: right;
     }
 </style>
 <table border="1" class="dataframe">
@@ -136,7 +194,7 @@ df[["subject", "session", "addiction"]].head()
       <td>0</td>
     </tr>
     <tr>
-      <th>1</th>
+      <th>1</th>
       <td>D0019</td>
       <td>111</td>
       <td>0</td>
@@ -163,7 +221,10 @@ df[["subject", "session", "addiction"]].head()
 </table>
 </div>
 
-## Generating summaries
+## Generating Summaries
+
+We can also generate summaries based on the dataset queries. For
+example, let's find the number of subjects with anatomical data:
 
 ```python
 anat_subject_tags = ds.query(returns="subject", datatype="anat")
@@ -171,27 +232,45 @@ anat_subjects = {subject for t in anat_subject_tags for subject in t}
 len(anat_subjects)
 ```
 
+This gives us the count of subjects with anatomical data:
+
     699
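
The set comprehension above flattens the nested query results and
deduplicates them in one step. Here is the same pattern in isolation,
with made-up subject tuples standing in for the query output:

```python
# Hypothetical query results: each item is a tuple of subject labels,
# mirroring what ds.query(returns="subject", ...) yields.
anat_subject_tags = [("D0019",), ("D0020",), ("D0019",), ("D0021",)]

# Flatten the nested tuples and deduplicate with a set comprehension.
anat_subjects = {subject for t in anat_subject_tags for subject in t}
print(len(anat_subjects))  # 3
```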
 
+Similarly, we can find the number of subjects with eye-tracking data:
+
 ```python
 eyetrack_subject_tags = ds.query(returns="subject", datatype="eyetrack")
 eyetrack_subjects = {subject for t in eyetrack_subject_tags for subject in t}
 len(eyetrack_subjects)
 ```
 
+This gives us the count:
+
     1075    
 
+Lastly, let's query the total number of subjects in the database:
+
 ```python
 df = db.query(table="subjects")
 len(df)
 ```
 
+This returns the total number of subjects:
+
     2276
 
+And the number of entries in a specific table, such as the modified
+Kuppuswamy socioeconomic scale:
+
 ```python
 df = db.query(table="modified_kuppuswamy_socioeconomic_scale")
 len(df)
 ```
 
+This gives us the count:
+
     1444
 
+This concludes the tutorial. You've learned how to load the dataset,
+query its components, and generate summaries using the `almirah`
+library.

+ 40 - 7
tutorials/modality/accessing-clinical-records.md

@@ -1,34 +1,57 @@
 # Accessing clinical records using almirah
 
+This tutorial will guide you through the process of accessing clinical
+records from the CALM Brain Resource using the `almirah` Python
+library.
+
+## Connecting to the Database
+
+First, we'll import the necessary library and connect to the database.
+
 ```python
 from almirah import Database
 ```
 
+Create a database instance with the specified name, host, and backend:
+
 ```python
-db = Database(name="calm-brain", host="https://calm-brain.ncbs.res.in/db-request/" , backend="request")
+db = Database(name="calm-brain", host="https://calm-brain.ncbs.res.in/db-request/", backend="request")
 db
 ```
 
+This should output:
+
     <Database url: 'request:calm-brain@https://calm-brain.ncbs.res.in/db-request/'>
 
+## Querying the Database
+
+Next, we connect to the database using credentials:
+
 ```python
 db.connect("username", "password")
+```
+
+Now, let's query a specific table, such as `presenting_disorders`, and
+display some of the data:
+
+```python
 df = db.query(table="presenting_disorders")
 df[["subject", "session", "addiction"]].head()
 ```
 
+This displays the first few rows of the queried table in a DataFrame
+format:
+
 <div>
 <style scoped>
     .dataframe tbody tr th:only-of-type {
-        vertical-align: middle;
+        vertical-align: middle;
     }
-
     .dataframe tbody tr th {
-        vertical-align: top;
+        vertical-align: top;
     }
-
     .dataframe thead th {
-        text-align: right;
+        text-align: right;
     }
 </style>
 <table border="1" class="dataframe">
@@ -75,16 +98,26 @@ df[["subject", "session", "addiction"]].head()
 </table>
 </div>
 
+We can also get the total number of records in the table:
+
 ```python
 len(df)
 ```
 
+This returns the total number of records:
+
     2561
 
+Lastly, let's find the number of records where addiction is noted:
+
 ```python
 len(df[df["addiction"] == 1])
 ```
 
-    522
+This gives us the count of records with addiction:
 
+    522
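
The pandas expression `len(df[df["addiction"] == 1])` builds a boolean
mask and counts the rows it keeps. A minimal sketch of the same count
in plain Python, using toy records (the values below are invented for
illustration):

```python
# Toy stand-ins for rows of the presenting_disorders table.
records = [
    {"subject": "D0019", "session": "111", "addiction": 1},
    {"subject": "D0020", "session": "101", "addiction": 0},
    {"subject": "D0021", "session": "101", "addiction": 1},
]

# Equivalent of len(df[df["addiction"] == 1]) without pandas.
n_addiction = sum(1 for row in records if row["addiction"] == 1)
print(n_addiction)  # 2
```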
 
+This concludes the tutorial. You've learned how to connect to a
+database, query its tables, and analyze the data using the `almirah`
+library.

+ 43 - 1
tutorials/modality/importing-eyetrack-with-mne.md

@@ -1,5 +1,13 @@
 # Importing Eye tracking data with MNE-Python
 
+This tutorial will guide you through the process of importing and
+visualizing eye-tracking data using MNE-Python and `almirah`.
+
+## Setup
+
+First, we'll import the necessary libraries and set the log level for
+MNE.
+
 ```python
 import mne
 import matplotlib.pyplot as plt
@@ -9,28 +17,48 @@ from almirah import Layout
 mne.set_log_level(False)
 ```
 
+## Loading the Data
+
+Next, we'll set up the layout to access the eye-tracking data.
+
 ```python
 lay = Layout(root="/path/to/data", specification_name="bids")
 lay
 ```
 
+This should output:
+
     <Layout root: '/path/to/data'>
 
+We can query the layout to find all eye-tracking files with the `.asc` extension:
+
 ```python
 files = lay.query(datatype="eyetrack", extension=".asc")
 len(files)
 ```
 
+This gives the total number of eye-tracking files:
+
     3632
 
+## Querying a Specific File
+
+To query a specific file, we can filter by subject, datatype, task, and extension:
+
 ```python
 file = lay.query(subject="D0019", datatype="eyetrack", task="FIX", extension=".asc")[0]
 
 print(file.rel_path)
 ```
 
+This should output the relative path of the file:
+
     sub-D0019/ses-111/eyetrack/sub-D0019_ses-111_task-FIX_run-01_eyetrack.asc
 
+## Reading and Plotting the Data
+
+We use MNE to read the eye-tracking data file and create annotations for blinks:
+
 ```python
 raw = mne.io.read_raw_eyelink(file.path, create_annotations=["blinks"])
 custom_scalings = dict(eyegaze=1e3)
@@ -39,7 +67,11 @@ plt.close()
 ```
     
 ![png](../images/eyetrack/eye-position-plot.png)
-    
+
+## Inspecting the Data
+
+We can inspect the metadata and channels in the raw object:
+
 ```python
 raw
 ```
@@ -69,17 +101,27 @@ raw
   **Duration:** 00:01:11 (HH:MM:SS)  
 </details>
 
+We can also list the channel names:
+
 ```python
 raw.ch_names
 ```
 
+This should output:
+
     ['xpos_left', 'ypos_left', 'pupil_left']
 
+And inspect the data for a specific channel, such as `xpos_left`:
+
 ```python
 raw["xpos_left"]
 ```
 
+This gives the data array and the corresponding time points:
+
     (array([[510.2, 510.1, 509.9, ..., 454.4, 454.8, 455.5]]),
      array([0.0000e+00, 1.0000e-03, 2.0000e-03, ..., 7.0556e+01, 7.0557e+01,
             7.0558e+01]))
 
+This concludes the tutorial. You've learned how to import, query, and
+visualize eye-tracking data using MNE-Python and `almirah`.

+ 47 - 6
tutorials/modality/processing-eeg-with-mne.md

@@ -1,5 +1,14 @@
 # Processing EEG with MNE-Python
 
+This tutorial demonstrates how to process EEG data using MNE-Python
+and `almirah`. The goal is to show how to interface with other
+analysis libraries and generate plots, rather than derive insights.
+
+## Setup
+
+First, we'll import the necessary libraries and set up the logging and
+warning configurations.
+
 ```python
 import mne
 import warnings
@@ -12,38 +21,62 @@ mne.set_log_level(False)
 warnings.filterwarnings('ignore')
 ```
 
+## Loading the Data
+
+Next, we'll set up the layout to access the EEG data.
+
 ```python
 lay = Layout(root="/path/to/data", specification_name="bids")
 lay
 ```
 
+This should output:
+
     <Layout root: '/path/to/data'>
     
+We can also get the layout using the specification name:
+
 ```python
 lay = Layout.get(specification_name='bids')
 ```
 
+We can query the layout to find all EEG files with the `.vhdr` extension:
+
 ```python
 files = lay.query(datatype="eeg", extension=".vhdr")
 len(files)
 ```
 
+This gives the total number of EEG files:
+
     2223
 
+## Querying Specific Files
+
+To query specific files, we can filter by subject, session, datatype, task, and extension:
+
 ```python
-vhdr_file = lay.query(subject="D0019", session="101", datatype="eeg", task="rest", extension =".vhdr")[0]
+vhdr_file = lay.query(subject="D0019", session="101", datatype="eeg", task="rest", extension=".vhdr")[0]
 eeg_file = lay.query(subject="D0019", session="101", datatype="eeg", task="rest", extension=".eeg")[0]
 montage_file = lay.query(subject="D0019", session="101", space="CapTrak", suffix="electrodes")[0]
 
 print(vhdr_file.rel_path)
 ```
 
+This should output the relative path of the file:
+
     sub-D0019/ses-101/eeg/sub-D0019_ses-101_task-rest_run-01_eeg.vhdr
 
+We can then download the EEG file:
+
 ```python
 eeg_file.download()
 ```
 
+## Setting Up the Montage and Reading Raw Data
+
+Next, we set up the montage and read the raw EEG data:
+
 ```python
 montage = mne.channels.read_custom_montage(montage_file.path)
 raw = mne.io.read_raw_brainvision(vhdr_file.path, preload=True)
@@ -76,7 +109,9 @@ raw.info
   **Lowpass:** 500.00 Hz  
 </details>
 
-# Preprocessing
+## Preprocessing
+
+We apply a band-pass filter and plot the raw data:
 
 ```python
 # Apply a band-pass filter
@@ -89,7 +124,9 @@ plt.close()
 
 ![png](../images/eeg/raw-plot.png)
     
-# Artifact removal using ICA
+## Artifact Removal using ICA
+
+We set up and apply Independent Component Analysis (ICA) to remove artifacts:
 
 ```python
 # Setup ICA
@@ -108,7 +145,9 @@ raw_clean = ica.apply(raw.copy())
     
 ![png](../images/eeg/ica-plot.png)
     
-# Power Spectral Analysis
+## Power Spectral Analysis
+
+We compute and plot the Power Spectral Density (PSD):
 
 ```python
 # Compute and plot Power Spectral Density (PSD)
@@ -117,7 +156,8 @@ plt.show()
 ```
     
 ![png](../images/eeg/psd-plot.png)
-    
+
+We also plot the topographic distribution of the spectral power:
 
 ```python
 raw_clean.compute_psd().plot_topomap(ch_type="eeg", agg_fun=np.median)
@@ -125,5 +165,6 @@ plt.close()
 ```
     
 ![png](../images/eeg/spectrum-topo-plot.png)
-    
 
+This concludes the tutorial. You've learned how to interface with
+MNE-Python to process EEG data using `almirah`.

+ 57 - 4
tutorials/modality/processing-nirs-with-mne.md

@@ -1,5 +1,13 @@
 # Reading and processing NIRS data with MNE-Python
 
+This tutorial demonstrates how to read and process NIRS data using
+MNE-Python and `almirah`.
+
+## Setup
+
+First, we'll import the necessary libraries and set up the logging and
+warning configurations.
+
 ```python
 import mne
 import warnings
@@ -13,31 +21,53 @@ mne.set_log_level(False)
 warnings.filterwarnings('ignore')
 ```
 
+## Loading the Data
+
+Next, we'll set up the layout to access the NIRS data.
+
 ```python
 lay = Layout(root="/path/to/data", specification_name="bids")
 print(lay)
 ```
 
+This should output:
+
     <Layout root: '/path/to/data'>
 
+We can query the layout to find all NIRS files with the `.snirf` extension:
+
 ```python
 files = lay.query(datatype="nirs", extension=".snirf")
 len(files)
 ```
 
+This gives the total number of NIRS files:
+
     1315
 
+## Querying a Specific File
+
+To query a specific file, we can filter by subject, task, datatype, and extension:
+
 ```python
 file = lay.query(subject="D0019", task="rest", datatype="nirs", extension=".snirf")[0]
 print(file.rel_path)
 ```
 
+This should output the relative path of the file:
+
     sub-D0019/ses-111/nirs/sub-D0019_ses-111_task-rest_run-01_nirs.snirf
 
+We can then download the NIRS file:
+
 ```python
 file.download()
 ```
 
+## Reading Raw Data
+
+Next, we read the raw NIRS data:
+
 ```python
 raw = mne.io.read_raw_snirf(file.path)
 raw.load_data()
@@ -49,12 +79,10 @@ raw.load_data()
         <tr>
             <th>Measurement date</th>
             <td>November 12, 1917  00:00:00 GMT</td>
-
         </tr>
         <tr>
             <th>Experimenter</th>
             <td>Unknown</td>
-
         </tr>
         <tr>
             <th>Participant</th>
@@ -112,7 +140,7 @@ raw.load_data()
                 </tr>
             </table>
             </details>
-	    
+
 ```python
 print(raw.info)
 ```
@@ -132,6 +160,11 @@ print(raw.info)
      subject_info: 4 items (dict)
     >
 
+## Preprocessing
+
+We pick the NIRS channels, compute the source-detector distances, and
+keep only channels with a source-detector separation greater than
+0.01 m:
+
 ```python
 picks = mne.pick_types(raw.info, meg=False, fnirs=True)
 dists = mne.preprocessing.nirs.source_detector_distances(raw.info, picks=picks)
@@ -142,6 +175,8 @@ plt.show()
     
 ![png](../images/nirs/raw-plot.png)
     
+We convert the raw data to optical density:
+
 ```python
 raw_od = mne.preprocessing.nirs.optical_density(raw)
 raw_od.plot(n_channels=len(raw_od.ch_names), duration=500, show_scrollbars=False)
@@ -150,6 +185,8 @@ plt.show()
 
 ![png](../images/nirs/optical-density-plot.png)
     
+Next, we compute the Scalp Coupling Index (SCI) and plot the distribution:
+
 ```python
 sci = mne.preprocessing.nirs.scalp_coupling_index(raw_od)
 fig, ax = plt.subplots()
@@ -160,10 +197,14 @@ plt.show()
 
 ![png](../images/nirs/scalp-coupling-index-plot.png)
     
+We mark channels with a low SCI as bad:
+
 ```python
 raw_od.info["bads"] = list(compress(raw_od.ch_names, sci < 0.5))
 ```
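
`itertools.compress` pairs each channel name with a boolean mask entry
and keeps the names where the mask is true. In MNE, `sci < 0.5` works
directly because `sci` is a NumPy array; the sketch below uses a plain
list comprehension for the mask and toy channel names and SCI values:

```python
from itertools import compress

# Toy channel names and scalp coupling index values (illustrative only).
ch_names = ["S1_D1 hbo", "S1_D2 hbo", "S2_D1 hbo"]
sci = [0.92, 0.31, 0.87]

# compress() keeps each name whose corresponding mask entry is True,
# so this selects the channels with SCI below the 0.5 threshold.
bads = list(compress(ch_names, [s < 0.5 for s in sci]))
print(bads)  # ['S1_D2 hbo']
```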
 
+We convert the optical density data to hemoglobin concentration:
+
 ```python
 raw_haemo = mne.preprocessing.nirs.beer_lambert_law(raw_od, ppf=0.1)
 raw_haemo.plot(n_channels=len(raw_haemo.ch_names), duration=500, show_scrollbars=False)
@@ -172,6 +213,10 @@ plt.show()
     
 ![png](../images/nirs/raw-haemo-plot.png)
     
+## Power Spectral Analysis
+
+We compute and plot the Power Spectral Density (PSD) before and after filtering:
+
 ```python
 fig = raw_haemo.compute_psd().plot(average=True, amplitude=False)
 fig.suptitle("Before filtering", weight="bold", size="x-large")
@@ -185,10 +230,16 @@ plt.show()
         
 ![png](../images/nirs/raw-haemo-psd-after-filtering.png)
     
+## Epoching
+
+We create epochs from the continuous data:
+
 ```python
 epochs = mne.make_fixed_length_epochs(raw_haemo, duration=30, preload=False)
 ```
 
+Finally, we plot the epochs:
+
 ```python
 epochs.plot_image(combine="mean", vmin=-30, vmax=30,
                              ts_args=dict(ylim=dict(hbo=[-15, 15],
@@ -199,4 +250,6 @@ plt.show()
 ![png](../images/nirs/deoxyhemoglobin-plot.png)
         
 ![png](../images/nirs/oxyhemoglobin-plot.png)
-    
+
+This concludes the tutorial. You've learned how to read and process
+near-infrared spectroscopy data using MNE-Python and `almirah`.

+ 51 - 3
tutorials/modality/reading-mri-with-nibabel.md

@@ -1,5 +1,12 @@
 # Reading Structural MRI with nibabel
 
+This tutorial demonstrates how to read structural MRI data using
+`nibabel` and `nilearn` in conjunction with `almirah`.
+
+## Setup
+
+First, we'll import the necessary libraries.
+
 ```python
 import nibabel as nib
 import nilearn as nil
@@ -7,41 +14,69 @@ import nilearn as nil
 from almirah import Layout
 ```
 
+## Loading the Data
+
+Next, we'll set up the layout to access the structural MRI data.
+
 ```python
 lay = Layout(root="/path/to/data", specification_name="bids")
 lay
 ```
 
+This should output:
+
     <Layout root: '/path/to/data'>
 
+We can query the layout to find all anatomical files with the `.nii.gz` extension:
+
 ```python
 files = lay.query(datatype="anat", extension=".nii.gz")
 ```
 
+## Querying a Specific File
+
+To query a specific file, we can filter by subject, datatype, suffix, and extension:
+
 ```python
 file = lay.query(subject="D0020", datatype="anat", suffix="T1w", extension=".nii.gz")[0]
 print(file.rel_path)
 ```
 
+This should output the relative path of the file:
+
     sub-D0020/ses-101/anat/sub-D0020_ses-101_T1w.nii.gz
 
+We can then download the MRI file:
+
 ```python
 file.download()
 ```
 
+This confirms the download:
+
     get(ok): sub-D0020/ses-101/anat/sub-D0020_ses-101_T1w.nii.gz (file) [from origin...]
 
+## Reading the MRI Data
+
+Next, we read the MRI data using `nibabel`:
+
 ```python
 raw = nib.load(file.path)
 type(raw)
 ```
 
+This should output:
+
     nibabel.nifti1.Nifti1Image
 
+We can inspect the header information of the MRI data:
+
 ```python
 print(raw.header)
 ```
 
+This outputs the header information:
+
     <class 'nibabel.nifti1.Nifti1Header'> object, endian='<'
     sizeof_hdr      : 348
     data_type       : b''
@@ -88,27 +123,40 @@ print(raw.header)
     intent_name     : b''
     magic           : b'n+1'
 
+We can also get the raw data as a NumPy array:
+
 ```python
 raw_data = raw.get_fdata()
 type(raw_data)
 ```
 
+This should output:
+
     numpy.ndarray
 
+And we can check the shape of the raw data:
+
 ```python
 raw_data.shape
 ```
 
+This gives the shape of the data:
+
     (192, 256, 256)
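
Since the volume is a plain NumPy array of shape `(192, 256, 256)`, it
can be sliced along any axis to obtain 2-D views for plotting. A
sketch with a zero array standing in for the real T1w data:

```python
import numpy as np

# A zero array standing in for the loaded T1w volume.
raw_data = np.zeros((192, 256, 256))

# Extract the middle slice along each axis.
sagittal = raw_data[96, :, :]
coronal = raw_data[:, 128, :]
axial = raw_data[:, :, 128]

print(sagittal.shape, coronal.shape, axial.shape)
# (256, 256) (192, 256) (192, 256)
```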
 
+## Visualizing the MRI Data
+
+Finally, we use `nilearn` to plot the MRI data:
+
 ```python
 from nilearn import plotting
 
 plotting.plot_img(raw)
 ```
 
-    <nilearn.plotting.displays._slicers.OrthoSlicer at 0x310f73b30>
-    
+This will generate the plot:
+
 ![png](../images/mri/reading-with-nibabel.png)
     
-
+This concludes the tutorial. You've learned how to read MRI data using
+`nibabel` and `nilearn`.

+ 53 - 15
tutorials/multimodal/average-eeg-across-disorders.md

@@ -1,5 +1,13 @@
 # Average EEG signal across various disorders
 
+This tutorial demonstrates how to compute and visualize the average
+EEG signal across various disorders using `mne`, `pandas`, and
+`seaborn` in conjunction with `almirah`.
+
+## Setup
+
+First, we'll import the necessary libraries and set the log level for MNE.
+
 ```python
 import mne
 import pandas as pd
@@ -10,6 +18,10 @@ from almirah import Dataset
 mne.set_log_level(False)
 ```
 
+## Loading the Dataset
+
+Next, we'll load the dataset and query the EEG files.
+
 ```python
 ds = Dataset(name="calm-brain")
 eeg_header_files = ds.query(datatype="eeg", task="rest", extension=".vhdr")
@@ -17,13 +29,21 @@ eeg_data_files = ds.query(datatype="eeg", task="rest", extension=".eeg")
 len(eeg_data_files)
 ```
 
+This should give the total number of EEG files:
+
     1120
 
+We then download the EEG data files.
+
 ```python
 for file in eeg_data_files:
     file.download()
 ```
 
+## Querying the Database
+
+We connect to the database and query the presenting disorders table.
+
 ```python
 db = ds.components[2]
 db.connect("username", "password")
@@ -31,18 +51,18 @@ df = ds.query(table="presenting_disorders")
 df[["subject", "session", "addiction"]].head()
 ```
 
+This displays the first few rows of the queried table in a DataFrame format.
+
 <div>
 <style scoped>
     .dataframe tbody tr th:only-of-type {
-        vertical-align: middle;
+        vertical-align: middle;
     }
-
     .dataframe tbody tr th {
-        vertical-align: top;
+        vertical-align: top;
     }
-
     .dataframe thead th {
-        text-align: right;
+        text-align: right;
     }
 </style>
 <table border="1" class="dataframe">
@@ -89,6 +109,10 @@ df[["subject", "session", "addiction"]].head()
 </table>
 </div>
 
+## Processing the EEG Data
+
+We define functions to compute the mean EEG signal and retrieve the disorders.
+
 ```python
 def get_eeg_mean(file):
     raw = mne.io.read_raw_brainvision(file.path)
@@ -120,6 +144,8 @@ def file_func(file):
     return mean_df.dropna()
 ```
 
+We process the EEG header files to compute the mean EEG signal and retrieve the disorders.
+
 ```python
 mean_dfs = list(map(file_func, eeg_header_files))
 mean_dfs = [df for df in mean_dfs if not df.empty]
@@ -127,18 +153,18 @@ mean_df = pd.concat(mean_dfs, sort=False)
 mean_df.head()
 ```
 
+This displays the first few rows of the combined DataFrame.
+
 <div>
 <style scoped>
     .dataframe tbody tr th:only-of-type {
-        vertical-align: middle;
+        vertical-align: middle;
     }
-
     .dataframe tbody tr th {
-        vertical-align: top;
+        vertical-align: top;
     }
-
     .dataframe thead th {
-        text-align: right;
+        text-align: right;
     }
 </style>
 <table border="1" class="dataframe">
@@ -179,22 +205,24 @@ mean_df.head()
 </table>
 </div>
 
+We compute the mean EEG signal for each disorder.
+
 ```python
 mean_df.groupby("disorder").mean()
 ```
 
+This displays the mean EEG signal for each disorder.
+
 <div>
 <style scoped>
     .dataframe tbody tr th:only-of-type {
-        vertical-align: middle;
+        vertical-align: middle;
     }
-
     .dataframe tbody tr th {
-        vertical-align: top;
+        vertical-align: top;
     }
-
     .dataframe thead th {
-        text-align: right;
+        text-align: right;
     }
 </style>
 <table border="1" class="dataframe">
@@ -237,9 +265,19 @@ mean_df.groupby("disorder").mean()
 </table>
 </div>
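
The `groupby("disorder").mean()` step can be mimicked without pandas
by accumulating a running sum and count per group. A sketch with toy
`(disorder, mean)` rows standing in for `mean_df`:

```python
from collections import defaultdict

# Toy rows standing in for mean_df: (disorder, mean EEG signal).
rows = [
    ("addiction", 1.2),
    ("addiction", 0.8),
    ("anxiety", 2.0),
]

# Equivalent of mean_df.groupby("disorder").mean() in plain Python:
# accumulate a running [sum, count] per disorder, then divide.
sums = defaultdict(lambda: [0.0, 0])
for disorder, value in rows:
    sums[disorder][0] += value
    sums[disorder][1] += 1

means = {d: total / count for d, (total, count) in sums.items()}
print(means)  # {'addiction': 1.0, 'anxiety': 2.0}
```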
 
+## Visualizing the Results
+
+We visualize the distribution of the mean EEG signal for each disorder using a violin plot.
+
 ```python
 ax = sns.violinplot(data=mean_df, x="mean", hue="disorder")
 sns.move_legend(ax, "upper left", bbox_to_anchor=(1, 1))
 ```
     
+This generates the plot:
+
 ![png](../images/multimodal/average-eeg-across-disorders.png)
+
+This concludes the tutorial. You've learned how different modalities
+can be combined into a single multimodal analysis.