Overview
Real-world biosignal monitoring faces the challenge of integrating multiple sensor modalities with different sampling rates, noise characteristics, and occasional dropouts. This project develops robust multimodal fusion architectures using spiking neural networks.
Multimodal Fusion
Combining EEG, EDA, PPG, and temperature signals
Robustness
Handles noisy and missing modalities gracefully
Neuromorphic-Ready
SNN architecture for efficient edge deployment
Multimodal Fusion Architecture
Figure 1: Multimodal spiking neural network architecture showing modality-specific encoders, temporal alignment layers, and late fusion for cognitive load classification.
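The late-fusion idea in Figure 1 can be sketched in a few lines: each modality passes through its own leaky integrate-and-fire (LIF) branch, and only the branch outputs are combined for classification. This is a minimal toy illustration, not the project's actual implementation; all function names and parameter values here are illustrative.

```python
# Toy late-fusion sketch: one LIF encoder branch per modality, with spike
# counts as branch features that are fused only at the classification stage.

def lif_spike_count(inputs, beta=0.9, threshold=1.0):
    """Run one leaky integrate-and-fire neuron over an input-current
    sequence and return the number of spikes it emits."""
    v, spikes = 0.0, 0
    for i in inputs:
        v = beta * v + i          # leaky membrane integration
        if v >= threshold:        # fire and hard-reset
            spikes += 1
            v = 0.0
    return spikes

def late_fusion(modality_streams):
    """Encode each modality independently, then concatenate the
    per-branch spike-count features."""
    return [lif_spike_count(stream) for stream in modality_streams]

# Example: three hypothetical preprocessed streams (e.g. EEG, EDA, PPG)
features = late_fusion([[0.6] * 4, [0.1] * 4, [1.2, 0.0, 1.2, 0.0]])
print(features)  # per-modality spike counts fed to the classifier
```

Because each branch is trained on its own time base, a classifier on top of these fused features can in principle keep operating when one branch degrades, which is the motivation for the late-fusion design shown in the figure.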
My Role
- Architecture Design: Developed a novel SNN architecture for multimodal biosignal fusion
- Implementation: Built the complete training pipeline using PyTorch and snnTorch

- Ablation Studies: Systematic evaluation of modality contributions and fusion strategies
- Evaluation: Comprehensive benchmarking with reproducible metrics
Methodology
Figure 2: (Left) Rate and temporal spike encoding for different biosignal modalities. (Right) Comparison of early, intermediate, and late fusion strategies.
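The two encoding schemes compared in Figure 2 can be sketched as follows. This is a generic textbook-style sketch under simple assumptions (amplitudes normalized to [0, 1], one spike for latency coding), not the project's exact encoders; the seed parameter is only there to make the example reproducible.

```python
import random

def rate_encode(x, n_steps, seed=0):
    """Rate coding: the normalized amplitude x in [0, 1] sets the
    per-step firing probability, yielding a Bernoulli spike train."""
    rng = random.Random(seed)
    return [1 if rng.random() < x else 0 for _ in range(n_steps)]

def latency_encode(x, n_steps):
    """Temporal (latency) coding: a stronger input fires earlier.
    A single spike is placed at a step inversely related to amplitude."""
    if x <= 0:
        return [0] * n_steps  # sub-threshold input: no spike
    t = min(n_steps - 1, int((1.0 - x) * (n_steps - 1)))
    return [1 if i == t else 0 for i in range(n_steps)]

print(rate_encode(0.8, 10))    # dense train for a strong input
print(latency_encode(0.8, 10)) # single early spike for the same input
```

Rate coding is robust to jitter but needs many time steps; latency coding is sparse and fast, which matters for slow channels like EDA and temperature versus fast ones like EEG.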
What's Innovative
This work introduces a novel SNN architecture designed specifically for multimodal biosignal fusion, with two key innovations: modality-specific temporal encoders that handle heterogeneous sampling rates, and a fusion stage that maintains performance even when channels are missing or noisy. Both properties are essential for real-world wearable deployment.
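The two robustness mechanisms above, aligning heterogeneous sampling rates and tolerating missing channels, can be sketched as below. This is a minimal nearest-neighbor/masking illustration under assumed conventions (a missing modality is represented as `None`), not the project's actual alignment layers.

```python
def resample_to_steps(signal, n_steps):
    """Align a modality to a common number of time steps by
    nearest-neighbor resampling, so encoders with different native
    sampling rates share one time base."""
    if not signal:
        return None  # modality dropped out entirely
    return [signal[min(len(signal) - 1, int(t * len(signal) / n_steps))]
            for t in range(n_steps)]

def fuse_with_mask(feature_vectors):
    """Late fusion that tolerates dropouts: missing branches (None)
    are masked out and the surviving features are averaged."""
    present = [f for f in feature_vectors if f is not None]
    if not present:
        return None  # nothing to classify from
    n = len(present[0])
    return [sum(f[i] for f in present) / len(present) for i in range(n)]

# A 4 Hz channel and a 2 Hz channel aligned to the same 4-step window:
print(resample_to_steps([1, 2, 3, 4], 4), resample_to_steps([1, 2], 4))
```

Masked averaging keeps the fused feature scale constant regardless of how many modalities survive, so the downstream classifier sees consistent inputs under dropout.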
Results
Figure 3: (Left) Ablation study showing contribution of each modality. (Right) Confusion matrix for binary cognitive load classification (Low/High).
Publication
Spiking Neural Networks for Mental Workload Classification with a Multimodal Approach
2025 IEEE Conference on Computational Intelligence in Bioinformatics and Computational Biology (CIBCB), Tainan, Taiwan, pp. 1575–1578