Multimodal Human-LLM Interaction for Workload-Aware Assistance
Investigating how LLM assistance changes programmer behavior, attention, and cognitive load during coding tasks. Developing sensor-informed interaction metrics for adaptive interfaces.
My Role
Co-Lead Scientist: Concept, study design, implementation, data collection & analysis
What's Innovative
To our knowledge, the first study to combine physiological sensing with LLM interaction logs to understand real-time cognitive state during AI-assisted coding. It lays the foundation for adaptive AI assistants that respond to the user's mental state.
Key Contributions
- Designed a human-subject protocol combining eye tracking and physiological signals with task telemetry
- Built Python tooling for time-alignment, segmentation, and annotation of multimodal streams (see the alignment sketch after this list)
- Extracted language and interaction features linked to self-reported workload and objective task performance (see the feature sketch after this list)
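A minimal sketch of what the time-alignment step could look like, assuming pandas and timestamped DataFrames; the column names (`timestamp`, `event`, `pupil_diameter`) and the 50 ms tolerance are illustrative assumptions, not the project's actual schema:

```python
import pandas as pd

def align_streams(telemetry: pd.DataFrame,
                  sensors: pd.DataFrame,
                  tolerance_ms: int = 50) -> pd.DataFrame:
    """Attach the nearest-in-time sensor sample to each telemetry event."""
    # merge_asof requires both frames to be sorted by the join key
    telemetry = telemetry.sort_values("timestamp")
    sensors = sensors.sort_values("timestamp")
    return pd.merge_asof(
        telemetry,
        sensors,
        on="timestamp",
        direction="nearest",  # closest sample before or after the event
        # leave sensor columns NaN when no sample falls within the tolerance
        tolerance=pd.Timedelta(milliseconds=tolerance_ms),
    )
```

An asof join like this handles the common case where sensor streams are sampled irregularly and on a different clock rate than the task telemetry.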
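And a hedged sketch of per-segment feature extraction over the aligned frame; the `segment_id` grouping and the specific features (prompt count, mean pupil diameter, segment duration) are hypothetical stand-ins for the study's actual feature set:

```python
import pandas as pd

def segment_features(aligned: pd.DataFrame) -> pd.DataFrame:
    """Aggregate simple interaction and physiology features per task segment."""
    grouped = aligned.groupby("segment_id")
    return pd.DataFrame({
        # interaction feature from the telemetry side
        "n_prompts": grouped["event"].agg(lambda e: (e == "llm_prompt").sum()),
        # physiological feature from the sensor side
        "mean_pupil_mm": grouped["pupil_diameter"].mean(),
        # segment duration in seconds
        "duration_s": grouped["timestamp"].agg(
            lambda t: (t.max() - t.min()).total_seconds()),
    })
```

Features aggregated this way can then be compared against per-segment self-reports and objective performance measures.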