Problems with facial emotion recognition (FER) are common in people with autism spectrum disorder (ASD). This deficit is related not to how information is encoded in the neural signal, but to how it is decoded, according to a recent study in Biological Psychiatry: Cognitive Neuroscience and Neuroimaging.
The researchers studied 40 adolescents with ASD and 48 without. Participants completed the Diagnostic Analysis of Nonverbal Accuracy FER task while EEG was recorded. The researchers then analyzed the EEG data with a deep convolutional neural network (CNN) classifier, a deep learning technique.
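The study's exact network architecture is not described here. As a rough, purely illustrative sketch of how a CNN classifier maps a multichannel EEG epoch to an emotion label, the following NumPy code implements a forward pass through one temporal convolution layer, a ReLU, global average pooling, and a softmax readout. All layer sizes, the label set, and the (untrained, random) weights are hypothetical, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

N_CHANNELS = 32    # hypothetical EEG montage size
N_SAMPLES = 256    # time points per epoch
N_FILTERS = 8      # temporal convolution filters
KERNEL = 16        # filter length in samples
EMOTIONS = ["happy", "sad", "angry", "fearful"]  # example label set

# Random (untrained) weights -- illustrative only; a real classifier
# would learn these from labeled EEG epochs.
conv_w = rng.normal(0.0, 0.1, (N_FILTERS, N_CHANNELS, KERNEL))
out_w = rng.normal(0.0, 0.1, (len(EMOTIONS), N_FILTERS))

def conv1d(x, w):
    """Valid-mode temporal convolution over a (channels, time) epoch."""
    n_f, _, k = w.shape
    t_out = x.shape[1] - k + 1
    out = np.empty((n_f, t_out))
    for f in range(n_f):
        for t in range(t_out):
            out[f, t] = np.sum(w[f] * x[:, t:t + k])
    return out

def classify(epoch):
    """Forward pass: conv -> ReLU -> global average pool -> softmax."""
    h = np.maximum(conv1d(epoch, conv_w), 0.0)   # ReLU nonlinearity
    pooled = h.mean(axis=1)                      # global average pooling
    logits = out_w @ pooled                      # linear readout
    p = np.exp(logits - logits.max())            # stable softmax
    p /= p.sum()
    return EMOTIONS[int(np.argmax(p))], p

# Classify one simulated EEG epoch.
epoch = rng.normal(size=(N_CHANNELS, N_SAMPLES))
label, probs = classify(epoch)
print(label, probs.round(3))
```

In practice such a network would be trained end to end on many labeled epochs, and its per-trial predictions compared against the emotion actually shown, as the researchers did here.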
The CNN successfully predicted the viewed facial emotion (eg, happy, sad) on each trial. Classification accuracy was unrelated to ASD diagnosis. There was no overall relationship between CNN accuracy and behavioral performance, although this association was stronger in the ASD group.
The analysis indicates that facial emotion information is encoded in, and extractable from, the neural signals of individuals both with and without ASD. Related impairments therefore likely arise later, during decoding or use of that information.
One limitation is that the ASD group comprised only verbally able individuals.
Deficits in FER likely stem from “aberrations later in the processing stream such as usage or deployment of facial emotion information,” the researchers concluded.
“Findings reported here suggest that future studies should focus on identifying where and when the breakdown in translation from neural encoding to behavioral response may lie, which will be critical to further inform intervention development.”
Mayor Torres JM, Clarkson T, Hauschild KM, Luhmann CC, Lerner MD, Riccardi G. Facial emotions are accurately encoded in the neural signal of those with autism spectrum disorder: a deep learning approach. Biol Psychiatry Cogn Neurosci Neuroimaging. Published online April 16, 2021. doi:10.1016/j.bpsc.2021.03.015
This article originally appeared on Psychiatry Advisor