Abstract
Background: Over the last several decades, predicting protein structures from amino acid sequences has been a core task in bioinformatics. Nowadays, the most successful methods employ multiple sequence alignments and achieve excellent prediction performance. These predictions exploit all the amino acids observed at a given position across homologous sequences, together with their frequencies. However, the effect of a single amino acid substitution in a specific protein tends to be hidden by the alignment profile. For this reason, single-sequence-based predictions remain of interest even after accurate multiple-alignment methods have become available: using a single sequence ensures that the effects of substitutions are not confounded by homologous sequences.
Objective: This work aims to understand how the single-sequence secondary structure prediction of a residue is influenced by the surrounding residues, and how different prediction methods use single-sequence information to predict the structure.
Methods: We compare mutual information, the coefficients of two linear models, and three deep learning networks. For the deep learning networks, we use DeepLIFT analysis to assess the effect of each residue at each position on the prediction.
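As an illustration only (not the paper's actual pipeline), a minimal sketch of a profile-free mutual information measurement between a residue at a fixed offset and the central residue's secondary structure class could look as follows; the integer encoding, window offset, and random toy data are assumptions made for the example.

```python
import numpy as np
from sklearn.metrics import mutual_info_score

# Hypothetical toy data: integer-encoded amino acid types (0..19) at position i + offset
# and three-state secondary structure labels (0=H, 1=E, 2=C) at position i.
rng = np.random.default_rng(0)
aa_at_offset = rng.integers(0, 20, size=5000)   # residue type at the flanking position
ss_at_center = rng.integers(0, 3, size=5000)    # secondary structure class at the center

# Mutual information (in nats) between the flanking residue identity
# and the central residue's secondary structure class.
mi = mutual_info_score(aa_at_offset, ss_at_center)
print(f"I(aa at offset; ss at center) = {mi:.4f} nats")
```

Repeating such a measurement for each offset in a window around the central residue gives a direct, pairwise picture of how much each neighboring position carries about the local structure.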
Results: Mutual information and linear models quantify direct effects, whereas DeepLIFT applied on deep learning networks quantifies both direct and indirect effects.
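Below is a hedged sketch of how per-position DeepLIFT attributions can be obtained for a single-sequence predictor, using Captum's DeepLift as one available implementation of the method; the toy network, window size, zero baseline, and target class are illustrative assumptions, not the architectures studied in this work.

```python
import torch
from captum.attr import DeepLift

# Hypothetical single-sequence predictor: one-hot residue window -> 3-state structure.
class ToySSPredictor(torch.nn.Module):
    def __init__(self, window=15, n_aa=20, n_states=3):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Flatten(),
            torch.nn.Linear(window * n_aa, 64),
            torch.nn.ReLU(),
            torch.nn.Linear(64, n_states),
        )

    def forward(self, x):          # x: (batch, window, n_aa) one-hot
        return self.net(x)

model = ToySSPredictor().eval()

# A random one-hot window standing in for a real sequence fragment.
x = torch.zeros(1, 15, 20)
x[0, torch.arange(15), torch.randint(0, 20, (15,))] = 1.0

# DeepLIFT attributions w.r.t. one output class (target=0 here), against an
# all-zero baseline; summing over the one-hot channel yields a per-position
# contribution, i.e. the effect of each surrounding residue on the prediction.
dl = DeepLift(model)
attr = dl.attribute(x, baselines=torch.zeros_like(x), target=0)
per_position_effect = attr.sum(dim=-1).squeeze(0)
print(per_position_effect)
```

Because the attributions are computed through the trained network, they can reflect indirect, context-dependent effects that pairwise statistics such as mutual information or linear coefficients do not capture.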
Conclusion: Our analysis shows how different network architectures use the information in single protein sequences and highlights their differences with respect to linear models. In particular, the deep learning implementations take context and single-position information into account differently, with the best results obtained using the BERT architecture.
Keywords: Secondary structure prediction, single sequence, mutual information, linear model, deep learning, neural network, LSTM, BERT.