DR. PALANICHAMY NAVEEN, PROF. DR. HAW SU CHENG, ASSOC. PROF. DR. NG KOK WHY, DR K. REVATHI, MR. SUTHENT A/L TAMILSELVAM, MS. YOGA SHRI A/P MURTHI
Description of Invention
Emotion recognition from physiological signals such as EEG, heart rate, and skin conductance provides objective, real-time insight into human affect, overcoming the limitations of facial or vocal cues. However, challenges such as limited multi-modal datasets and signal noise complicate accurate modeling. We propose a hybrid framework that employs a Convolutional Neural Network (CNN) for robust feature extraction, enhanced by feature engineering to highlight key emotional patterns. Linear Regression (LR) is used to improve prediction consistency, while a Random Forest (RF) reduces overfitting and noise. Evaluated with standard metrics, the CNN+LR+RF model achieves 61% accuracy, surpassing conventional single-modality approaches. These results demonstrate the value of fusing multiple physiological signals with advanced machine learning for reliable emotion recognition, supporting applications in health monitoring and adaptive interfaces.
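
For illustration only, the sketch below shows one plausible shape of such a CNN+LR+RF pipeline. The channel count, window length, number of emotion classes, valence target, and the way the regression output is fed into the Random Forest are all assumptions, not the invention's actual configuration; it uses synthetic data and leaves CNN training out for brevity.

```python
# Hypothetical sketch of a CNN + Linear Regression + Random Forest pipeline
# for emotion recognition from multi-channel physiological signals.
# All shapes, class counts, and the fusion strategy are illustrative assumptions.

import numpy as np
import torch
import torch.nn as nn
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score


class SignalCNN(nn.Module):
    """Small 1D CNN that turns a window of physiological channels
    (e.g., EEG bands, heart rate, skin conductance) into a feature vector."""

    def __init__(self, n_channels: int = 8, n_features: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_channels, 16, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.Conv1d(16, n_features, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # global pooling -> one value per filter
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x).squeeze(-1)  # (batch, n_features)


# Synthetic stand-in data: 600 windows, 8 channels, 256 samples each (assumed).
rng = np.random.default_rng(0)
X_raw = rng.standard_normal((600, 8, 256)).astype(np.float32)
y_class = rng.integers(0, 4, size=600)        # 4 emotion classes (assumed)
y_valence = rng.uniform(-1.0, 1.0, size=600)  # continuous valence target (assumed)

# CNN feature extraction (weights would normally be trained; omitted here).
cnn = SignalCNN()
with torch.no_grad():
    feats = cnn(torch.from_numpy(X_raw)).numpy()

X_tr, X_te, yc_tr, yc_te, yv_tr, yv_te = train_test_split(
    feats, y_class, y_valence, test_size=0.2, random_state=0
)

# Linear Regression over the CNN features gives a smooth valence estimate.
lr = LinearRegression().fit(X_tr, yv_tr)

# Random Forest classifies emotions from the CNN features plus the regression
# output -- one plausible way to combine the two learners, assumed here.
rf = RandomForestClassifier(n_estimators=200, random_state=0)
rf.fit(np.column_stack([X_tr, lr.predict(X_tr)]), yc_tr)

pred = rf.predict(np.column_stack([X_te, lr.predict(X_te)]))
print("accuracy:", accuracy_score(yc_te, pred))
```

In this arrangement the CNN handles feature extraction, the regression output acts as an additional engineered feature that stabilizes predictions, and the Random Forest provides the final, noise-robust classification, mirroring the division of roles described above.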