Overview
This project builds a complete human-in-the-loop adaptive system that monitors a learner's cognitive state in real time and dynamically adjusts audio playback speed to optimize learning outcomes. The system demonstrates co-adaptation between human and machine.
Closed-Loop
Continuous sensing → inference → adaptation cycle
Human-in-the-Loop
Both learner and system adapt over time
Neuromorphic
Ultra-low-power SNN inference on the Dynap-SE chip
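In code, the closed loop reduces to a sense → infer → adapt cycle. Below is a minimal sketch of that cycle; all function names and the proportional update rule are illustrative stand-ins, not the system's actual API.

```python
# Minimal sketch of the sense -> infer -> adapt loop. Every component
# here (read_window, classify_load, set_playback_speed) is a
# hypothetical stub, not the project's real interface.
import time

def read_window():
    """Stub: return the latest multimodal feature window (EEG/PPG/EDA)."""
    return {"eeg_theta_alpha": 1.2, "hr": 72.0, "eda_scl": 0.4}

def classify_load(features):
    """Stub: map features to a cognitive-load estimate in [0, 1]."""
    return min(1.0, features["eeg_theta_alpha"] / 2.0)

def set_playback_speed(speed):
    """Stub: command the audio player; here we just log the setting."""
    print(f"playback speed -> {speed:.2f}x")

speed = 1.0
for _ in range(3):                       # the real loop runs continuously
    load = classify_load(read_window())  # sensing + inference
    # Proportional adaptation: slow down under high load, speed up under
    # low load, clipped to a comfortable range.
    speed = max(0.75, min(1.5, speed + 0.1 * (0.5 - load)))
    set_playback_speed(speed)
    time.sleep(0.1)
```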
System Architecture
Figure 1: Complete system architecture showing multimodal sensing (EEG, PPG, EDA), Lab Streaming Layer (LSL) synchronization, SNN inference on the Dynap-SE, and adaptive content control.
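The sensing layer in Figure 1 rides on the Lab Streaming Layer. Here is a minimal sketch of pulling the three modalities with pylsl (the LSL Python binding); the stream types and buffer sizes are assumptions about how the devices advertise themselves.

```python
# Sketch: resolve and pull EEG/PPG/EDA streams over LSL with pylsl.
from pylsl import StreamInlet, resolve_byprop

inlets = {}
for modality in ("EEG", "PPG", "EDA"):
    streams = resolve_byprop("type", modality, timeout=5.0)
    if streams:
        inlets[modality] = StreamInlet(streams[0], max_buflen=10)

for _ in range(100):  # the real pipeline loops for the whole session
    for modality, inlet in inlets.items():
        # pull_chunk returns ([samples], [timestamps]); timestamps share
        # the LSL clock, so modalities can be aligned downstream.
        samples, stamps = inlet.pull_chunk(timeout=0.0)
        if stamps:
            offset = inlet.time_correction()  # offset to the local clock
            print(modality, len(samples), "samples; clock offset", offset)
```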
My Role
- Concept & Design: Designed the complete human-in-the-loop co-adaptation paradigm
- Full-Stack Implementation: Built streaming acquisition, real-time feature extraction, and adaptation logic (see the feature-extraction sketch after this list)
- Hardware Integration: Configured and deployed SNN on Dynap-SE neuromorphic processor
- User Studies: Designed experiments to evaluate system efficacy and user experience
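To make the full-stack piece concrete, here is a sketch of one real-time feature extractor: an EEG theta/alpha band-power ratio, a common cognitive-load proxy. The sampling rate, window length, and band edges are illustrative, not the project's actual settings.

```python
# Sketch: theta/alpha band-power ratio from a short EEG window.
import numpy as np
from scipy.signal import welch

FS = 250  # assumed EEG sampling rate in Hz

def band_power(freqs, psd, lo, hi):
    mask = (freqs >= lo) & (freqs < hi)
    return psd[mask].sum() * (freqs[1] - freqs[0])  # rectangle-rule integral

def theta_alpha_ratio(window):
    """window: 1-D EEG array from one channel, e.g. a 2 s buffer."""
    freqs, psd = welch(window, fs=FS, nperseg=min(len(window), FS))
    theta = band_power(freqs, psd, 4.0, 8.0)
    alpha = band_power(freqs, psd, 8.0, 13.0)
    return theta / (alpha + 1e-12)  # guard against division by zero

print(theta_alpha_ratio(np.random.randn(2 * FS)))  # demo on white noise
```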
Real-Time Processing Pipeline
Figure 2: (Left) Lab Streaming Layer synchronization of EEG, PPG, and EDA with latency-aware buffering. (Right) Modality-specific update rates and feature-age auditing for robust decision making.
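The feature-age audit in Figure 2 amounts to a small freshness check: take an adaptation decision only when every modality's latest feature is within its age budget. The per-modality limits below are illustrative assumptions.

```python
# Sketch: per-modality feature-age audit before each adaptation step.
import time

MAX_AGE = {"EEG": 0.5, "PPG": 2.0, "EDA": 5.0}  # seconds; assumed budgets

last_update = {}  # modality -> timestamp of latest feature
features = {}     # modality -> most recent feature value

def on_feature(modality, value, stamp=None):
    features[modality] = value
    last_update[modality] = stamp if stamp is not None else time.monotonic()

def fresh_snapshot():
    """Return the feature set only if every modality is fresh enough."""
    now = time.monotonic()
    for modality, max_age in MAX_AGE.items():
        age = now - last_update.get(modality, -float("inf"))
        if age > max_age:
            return None  # stale input: skip this adaptation step
    return dict(features)

on_feature("EEG", 1.3); on_feature("PPG", 72.0); on_feature("EDA", 0.4)
print(fresh_snapshot())  # all fresh -> feature dict; otherwise None
```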
What's Innovative
This work introduces a novel human-in-the-loop co-adaptation paradigm where both the learner and the system adapt together over time:
Unlike traditional adaptive systems, which only modify content, our approach enables bi-directional learning: system policies improve with user feedback while users adjust to system behavior, yielding personalized, long-term adaptation.
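As a sketch of what bi-directional learning can look like at the policy level, the toy speed policy below adapts to the learner by shifting its baseline toward manual speed overrides. The update rule and its constants are illustrative, not the deployed policy.

```python
# Sketch: a speed policy that both drives adaptation and learns from
# the user's corrections. All parameters here are illustrative.
class SpeedPolicy:
    def __init__(self, gain=0.1):
        self.gain = gain     # how strongly estimated load drives speed
        self.baseline = 1.0  # learned preferred resting speed

    def propose(self, load):
        # High load -> slower playback, low load -> faster.
        return self.baseline + self.gain * (0.5 - load)

    def incorporate_override(self, load, user_speed):
        # When the user manually corrects the speed, shift the baseline
        # toward their choice: the policy adapts to the user.
        error = user_speed - self.propose(load)
        self.baseline += 0.2 * error

policy = SpeedPolicy()
print(policy.propose(0.8))             # system adapts to the learner
policy.incorporate_override(0.8, 1.0)  # learner corrects the system
print(policy.propose(0.8))             # policy has moved toward the user
```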
Live Dashboard
Figure 3: Live monitoring dashboard showing real-time biosignals, extracted features, cognitive load classification, and adaptation history.
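A panel like the dashboard's biosignal trace can be prototyped with matplotlib's animation API; this sketch substitutes synthetic data for the live LSL feed.

```python
# Sketch: one live-updating signal panel. The data source is synthetic;
# the real dashboard would read from the buffered LSL streams.
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation

fig, ax = plt.subplots()
xs = np.linspace(0, 2, 500)  # 2 s rolling window
line, = ax.plot(xs, np.zeros_like(xs))
ax.set(ylim=(-3, 3), xlabel="time (s)", ylabel="signal (a.u.)")

def update(frame):
    # Replace this synthetic trace with the latest buffered biosignal.
    noise = 0.3 * np.random.randn(len(xs))
    line.set_ydata(np.sin(2 * np.pi * (xs + frame / 30)) + noise)
    return (line,)

anim = FuncAnimation(fig, update, interval=33, blit=True,
                     cache_frame_data=False)
plt.show()
```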