Yihang Zou

I am an undergraduate student majoring in Artificial Intelligence at South China University of Technology. My primary interests include:

  • Detection and Processing of Human Physiological Signals: Focusing on the acquisition and analysis of various physiological signals, with a particular emphasis on brain-computer interfaces (BCI) and neurosignal analysis.
  • Classification of Brain Signals: Utilizing advanced machine learning techniques to classify brain signals from modalities such as EEG (electroencephalogram), fNIRS (functional near-infrared spectroscopy), and fMRI (functional magnetic resonance imaging).
  • Brain Signal Encoding and Decoding: Exploring the transformation of neural activities into diverse outputs, such as images, videos, text, and speech. This involves developing systems that can interpret and generate meaningful information from brain signals.
  • Temporal Network Models: Investigating the application of temporal network models to understand the dynamic nature of physiological signals and their underlying patterns.
  • Signal Processing Techniques: Enhancing the quality and interpretability of neural data through advanced signal processing methods.

Educational Background

South China University of Technology (SCUT), Guangzhou

2022.09 – 2026.06  B.Eng. in Artificial Intelligence
2026.09 – 2029.06  M.S. in Electronic & Information Engineering (AI track) (Prospective)

Research Fields: Brain-Signal Processing | Multi-Modal Learning | Brain-Computer Interface (BCI)
Prospective: LLM-based Agents & Chain-of-Thought Reasoning

Personal Skills

  • Academic Ability:

    • GPA: 3.75/4.00
  • Professional Skills:

    • Proficient in programming languages such as Python, C++, C#, R, and MATLAB, with a focus on Python.
    • Experienced in using machine learning and deep learning frameworks, such as PyTorch.
    • Skilled in data analysis tools, including Pandas, NumPy, and Matplotlib.
    • Strong capabilities in data analysis, model construction, and algorithm optimization.
    • In-depth research in brain signal processing, including functional near-infrared spectroscopy (fNIRS), electroencephalogram (EEG), and functional magnetic resonance imaging (fMRI).
  • Language Skills:

    • English:
      • CET-4 (633)
      • CET-6 (581)
      • IELTS 7.0
    • Fluent in speaking, listening, reading, and writing, with the ability to conduct academic communication and read professional literature.
  • Research Abilities:

    • Participated in multiple research projects, such as “Affective Computing Based on fNIRS in Virtual Reality Environments” (core member of the National Innovation Project).
    • Achieved results in fNIRS signal classification, with a related paper accepted by the BIBM conference workshop.
    • Co-author of the patent “A Method and System for Identifying Depression Tendencies and Guiding Mindfulness” (core member, application under review).
  • Academic Competitions:

    • 2023 Baidu Paddle Cup Second Prize.
    • 2024 MCM/ICM (Mathematical/Interdisciplinary Contest in Modeling) Meritorious Award (top 6%).
    • 2025 MathorCup First Prize (top 10%), recommended for the national-level Third Prize.
    • 2024 Greater Bay Area Mathematical Finance Modeling Competition Second Prize.
    • 2024 14th Asia-Pacific Mathematical Contest in Modeling Third Prize.
    • 2024 International Innovation Competition for College Students, Industry Track Final Bronze Award.
    • 2024 Hongping ChangQing Fund Student Innovation Scholarship Second Prize.
    • 2025 South China University of Technology Second-Class Scholarship.
    • 2025 South China University of Technology “Excellence” Second-Class Scholarship (sponsored by Excellence Group).
  • Teamwork and Communication Skills:

    • Strong team spirit and communication skills, able to collaborate effectively with members from diverse backgrounds to advance project progress.

Research Project Experience

Research Project Experience 1: Affective Computing Based on Functional Near-Infrared Spectroscopy in Virtual Reality Environments (National Innovation Project, Hundred-Step Ladder Climbing Plan)

Project Duration: June 2023 - June 2024

Project Description: Participated in a national-level innovation project, focusing on detecting and assisting in the treatment of depression tendencies through virtual reality (VR) technology and functional near-infrared spectroscopy (fNIRS). The project designed and developed four VR scenarios, two for detecting depression tendencies and the other two for mindfulness guidance and auxiliary treatment. By recording the fNIRS signals of the cerebral cortex of subjects in specific VR environments and combining their performance in cognitive tasks, the project explored the relationship between depression tendencies and cognitive functions.

Technical Details:

  • Developed VR environments using Unity, with C# as the programming language.
  • Designed and implemented a VR scenario for a city spatial memory and navigation task. Research indicates that the spatial memory navigation ability of the hippocampus is related to the degree of depression, with subjects having depression tendencies performing worse in spatial memory tasks. This scenario indirectly assessed the subjects’ depression levels.
  • While subjects completed the VR tasks, their cerebral cortex fNIRS signals were recorded to analyze their brain activity patterns and identify neurophysiological indicators related to depression.
  • Combined psychological assessment scales to comprehensively evaluate the subjects’ emotional states and cognitive functions, verifying the effectiveness of the VR tasks.
  • Preprocessed and analyzed the collected fNIRS data to extract brain activity features related to depression, exploring new biomarkers.
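
The fNIRS preprocessing step above can be sketched as follows. This is a minimal, hypothetical illustration (band-pass filtering a hemoglobin time series with SciPy), not the project's actual pipeline; the cutoff frequencies are common defaults for hemodynamic signals, not the project's exact settings.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def bandpass_fnirs(signal, fs, low=0.01, high=0.2, order=3):
    """Band-pass filter an fNIRS hemoglobin time series.

    Typical hemodynamic responses live below ~0.2 Hz, while components
    below ~0.01 Hz are slow drift; both bounds are illustrative defaults.
    `fs` is the sampling rate in Hz.
    """
    sos = butter(order, [low, high], btype="band", fs=fs, output="sos")
    return sosfiltfilt(sos, signal)  # zero-phase filtering

# Synthetic example: a 0.05 Hz hemodynamic-like oscillation plus linear
# drift and measurement noise, sampled at 10 Hz for two minutes.
rng = np.random.default_rng(0)
fs = 10.0
t = np.arange(0, 120, 1 / fs)
raw = np.sin(2 * np.pi * 0.05 * t) + 0.5 * t / t.max() + 0.1 * rng.normal(size=t.size)
clean = bandpass_fnirs(raw, fs)
```

The zero-phase `sosfiltfilt` call avoids shifting the slow hemodynamic response in time, which matters when aligning signals to task events.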

Achievements:

  • The project results were filed as the patent “A Method and System for Identifying Depression Tendencies and Guiding Mindfulness,” whose application is currently under review.
  • Successfully developed a VR system for depression detection and auxiliary treatment, providing new technological means and research directions for mental health intervention.

Research Project Experience 2: Functional Near-Infrared Spectroscopy Classification Based on Deep Learning

Project Duration: September 2024 - November 2024

Project Description: Responsible for improving the functional near-infrared spectroscopy (fNIRS) classification model. Adopted a temporal-sequence Test-Time Training (TTT) architecture to optimize an existing Vision Transformer (ViT): replacing the attention layers with TTT layers significantly enhanced classification performance on temporal data. Validated on three public datasets, the improved model achieved state-of-the-art (SOTA) performance on two of them.

Technical Details:

  • Analyzed the limitations of existing fNIRS classification models and proposed an improvement plan.
  • Adopted the Test Time Training (TTT) model architecture based on temporal sequences to optimize the existing Vision Transformer (ViT), enhancing the model’s ability to capture temporal features.
  • Replaced the attention layers with the TTT model architecture to optimize the model’s dynamic adjustment capabilities and improve classification accuracy.
  • Conducted experimental validation on three public datasets, using cross-validation and performance evaluation to demonstrate the superiority of the improved model.
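
The attention-to-TTT swap described above can be illustrated with a simplified sketch of a TTT-linear sequence layer in the spirit of the published Test-Time Training architecture. `ttt_linear_layer`, its random projections, and its hyperparameters are all hypothetical stand-ins, not the project's implementation.

```python
import numpy as np

def ttt_linear_layer(x, d_hidden=16, eta=0.1, seed=0):
    """Simplified sketch of a TTT-linear sequence layer.

    The hidden state is the weight matrix W of a small inner linear
    model.  For every token the layer (1) takes one gradient step on a
    self-supervised loss ||W k_t - v_t||^2 and (2) emits W q_t.  This
    "learning at test time" update plays the sequence-mixing role that
    self-attention plays in a ViT block.
    """
    rng = np.random.default_rng(seed)
    seq_len, d_model = x.shape
    # Fixed random projections standing in for learned k/v/q projections.
    Wk = rng.normal(scale=d_model ** -0.5, size=(d_hidden, d_model))
    Wv = rng.normal(scale=d_model ** -0.5, size=(d_hidden, d_model))
    Wq = rng.normal(scale=d_model ** -0.5, size=(d_hidden, d_model))
    W = np.zeros((d_hidden, d_hidden))    # inner model, updated per token
    out = np.empty((seq_len, d_hidden))
    for t in range(seq_len):
        k, v, q = Wk @ x[t], Wv @ x[t], Wq @ x[t]
        err = W @ k - v                   # self-supervised prediction error
        W -= eta * np.outer(err, k)       # gradient step on ||W k - v||^2
        out[t] = W @ q
    return out

# Demo on a random 32-token sequence of 8-dimensional features.
x = np.random.default_rng(1).normal(size=(32, 8))
z = ttt_linear_layer(x)
```

Because the inner model's state is updated sequentially, the layer scales linearly with sequence length, which is one motivation for using it on long temporal fNIRS recordings.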

Achievements:

  • Achieved State-of-the-Art (SOTA) performance on two public datasets, significantly improving the accuracy of fNIRS classification.

Research Project Experience 3: Research on Automatic Epilepsy Detection System Based on Electroencephalogram (EEG)

Project Duration: March 2025 - Present

Project Description: Dedicated to developing an automatic epilepsy detection system based on electroencephalogram (EEG) signals, utilizing deep learning technology to achieve real-time detection and early warning of epileptic seizures. The project aims to enhance the efficiency and accuracy of epilepsy seizure detection, providing timely medical intervention for patients with epilepsy.

Technical Details:

  • EEG Signal Preprocessing:

    • Studied the fundamental principles of EEG signals, including the characteristics of different frequency bands (such as alpha, beta, delta, and theta waves) and their manifestations during epileptic seizures.
    • Employed techniques such as band-pass filtering, independent component analysis (ICA) to remove ocular artifacts, and baseline correction to preprocess EEG data, ensuring the quality of the input data.
  • Feature Extraction and Model Development:

    • Utilized wavelet transform to extract time-frequency features from EEG signals, capturing local changes and dynamic characteristics within the signals.
    • Designed a model architecture based on Test Time Training (TTT), integrating convolutional neural networks (CNN) and Transformer to enhance the model’s ability to recognize epileptic seizures.
    • Developed a hybrid model that combines the strengths of CNN and Transformer to efficiently extract spatiotemporal features, further improving detection accuracy.
  • Real-Time Detection System Design:

    • Constructed a real-time monitoring system architecture capable of real-time EEG signal acquisition, preprocessing, feature extraction, and detection.
    • Built an efficient streaming pipeline with tools such as Apache Kafka and TensorFlow Lite to keep system latency low, enabling timely detection of epileptic seizure signs and prompt early warnings.
    • Designed a multimodal alert mechanism, combining visual and auditory prompts, to ensure that patients and healthcare providers receive early warning information promptly.
  • EEG Analysis System Development:

    • Created an EEG analysis system capable of analyzing EEG signals over a period of time to automatically identify seizure episodes.
    • Implemented pre-seizure signal detection functionality, which analyzes signals from a period before the seizure to issue early warnings, providing patients with earlier intervention opportunities.
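
The band-oriented preprocessing and feature extraction steps above can be sketched as follows. This is a minimal illustration using the canonical band definitions and SciPy filtering; `band_powers` is a hypothetical helper, and the project's wavelet and ICA stages are not shown.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

# Canonical EEG frequency bands (Hz), matching the bands named above.
BANDS = {"delta": (0.5, 4.0), "theta": (4.0, 8.0),
         "alpha": (8.0, 13.0), "beta": (13.0, 30.0)}

def band_powers(eeg, fs):
    """Hypothetical helper: mean power of one EEG channel per band.

    Each band is isolated with a zero-phase Butterworth band-pass
    filter; the band's power is the mean squared amplitude.
    """
    feats = {}
    for name, (lo, hi) in BANDS.items():
        sos = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
        feats[name] = float(np.mean(sosfiltfilt(sos, eeg) ** 2))
    return feats

# Sanity check: a pure 10 Hz tone should concentrate power in alpha.
fs = 256.0
t = np.arange(0, 4, 1 / fs)
eeg = np.sin(2 * np.pi * 10 * t)
powers = band_powers(eeg, fs)
```

Band-power vectors like this are a common per-window input to seizure classifiers, alongside the wavelet time-frequency features mentioned above.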

Achievements:

  • Successfully developed an automatic epilepsy detection system based on deep learning, significantly improving the accuracy and real-time performance of epileptic seizure detection.
  • Enhanced the robustness and generalization ability of the system through the optimization of wavelet transform and TTT model architecture.
  • The EEG analysis system effectively identifies seizure episodes and issues early warnings, providing an effective technical solution for real-time monitoring and early warning of epilepsy patients.

Research Project Experience 4: Research on fNIRS-based Brain Signal Decoding

Project Duration: March 2025 - Present

Project Description: Participated in the research of brain signal decoding based on functional near-infrared spectroscopy (fNIRS), aiming to decode the brain signals of subjects to reconstruct the images they are imagining. The project collects fNIRS signals and language descriptions from subjects while they view images, and uses deep learning technology to achieve multimodal feature fusion and image generation, providing support for understanding the brain’s visual information processing mechanisms and developing new brain-computer interface technologies.

Technical Details:

  • Designed experimental protocols, selecting the COCO dataset as visual stimuli, and synchronously collected fNIRS signals and language descriptions from subjects while they viewed images.
  • Preprocessed and extracted features from fNIRS brain signals to identify key features reflecting brain activity.
  • Building on the CLIP architecture, aligned brain-signal features with CLIP's image and semantic (text) feature spaces to achieve multimodal feature fusion.
  • Used deep learning techniques such as Variational Autoencoders (VAE) for image generation tasks, attempting to reconstruct the images imagined by subjects and optimize the generation results.
  • Analyzed the similarity between generated and original images to evaluate the decoding effect, and adjusted model parameters and architecture according to experimental results to continuously improve decoding accuracy and image generation quality.
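
The CLIP-style alignment step can be sketched with a symmetric InfoNCE objective, which is how CLIP itself aligns two modalities. `clip_style_loss` is a hypothetical illustration; the project's actual loss and feature spaces may differ.

```python
import numpy as np

def clip_style_loss(brain_emb, image_emb, temperature=0.07):
    """Symmetric InfoNCE loss for brain-to-image alignment (a sketch).

    Row i of each matrix comes from the same trial, so the i-th brain
    embedding should match the i-th image embedding and no other.
    """
    # L2-normalize, as CLIP does before taking dot products.
    b = brain_emb / np.linalg.norm(brain_emb, axis=1, keepdims=True)
    v = image_emb / np.linalg.norm(image_emb, axis=1, keepdims=True)
    logits = (b @ v.T) / temperature       # (N, N) similarity matrix
    idx = np.arange(len(logits))

    def xent(l):
        # Cross-entropy with the diagonal entries as the targets.
        l = l - l.max(axis=1, keepdims=True)
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_probs[idx, idx].mean()

    return 0.5 * (xent(logits) + xent(logits.T))

# Perfectly paired embeddings should score far better than random ones.
rng = np.random.default_rng(0)
img = rng.normal(size=(8, 32))
aligned = clip_style_loss(img, img)
mismatched = clip_style_loss(rng.normal(size=(8, 32)), img)
```

Once brain embeddings land near the matching CLIP image embeddings, they can be fed to a generator such as a VAE-based decoder for image reconstruction, as described above.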

Achievements:

  • The project is still ongoing. An fNIRS-based brain-signal acquisition and processing platform has been established, achieving preliminary alignment of brain-signal features with multimodal features.
  • Future work will continue to optimize the model to improve decoding accuracy and strive for breakthroughs in the field of brain signal decoding.