Network model with internal complexity bridges artificial intelligence and neuroscience
Nature Computational Science volume 4, pages 584–599 (2024)
Artificial intelligence (AI) researchers currently believe that the main route to more general AI models is the big-model approach, in which existing neural networks are made ever deeper, larger and wider. We term this the big model with external complexity approach. In this work we argue for an alternative, the small model with internal complexity approach, which incorporates rich dynamical properties into individual neurons as a path to more capable and efficient AI models. We show that a network of internally simple neurons must be scaled up externally to reproduce the dynamical properties of internally complex ones. To illustrate this, we build a Hodgkin–Huxley (HH) network with rich internal complexity, in which each neuron is an HH model, and prove that its dynamical properties and performance can be equivalent to those of a larger leaky integrate-and-fire (LIF) network, in which each neuron is an LIF neuron with simple internal dynamics.
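To make the comparison concrete for readers outside computational neuroscience: the LIF neuron tracks a single membrane voltage with a hard threshold, whereas the HH neuron couples the voltage to three ion-channel gating variables through four nonlinear differential equations. The sketch below simulates both under a constant current step. It is a minimal illustration using the textbook Hodgkin–Huxley (1952) squid-axon constants, not the authors' released implementation, and the LIF parameters (tau, threshold, resistance) are arbitrary demonstration values.

import numpy as np

def simulate_lif(I, dt=0.01, tau=10.0, v_rest=-65.0, v_th=-50.0, v_reset=-65.0, R=2.0):
    # Leaky integrate-and-fire: one state variable with a linear leak,
    # plus a threshold-and-reset rule supplying the only nonlinearity.
    v, spikes, trace = v_rest, [], np.empty(len(I))
    for t, i_t in enumerate(I):
        v += dt / tau * (-(v - v_rest) + R * i_t)
        if v >= v_th:
            spikes.append(t * dt)
            v = v_reset
        trace[t] = v
    return trace, spikes

def simulate_hh(I, dt=0.01):
    # Hodgkin-Huxley: membrane voltage V coupled to gating variables m, h, n
    # (standard 1952 constants; Euler integration for brevity).
    c_m, g_na, g_k, g_l = 1.0, 120.0, 36.0, 0.3
    e_na, e_k, e_l = 50.0, -77.0, -54.387
    v, m, h, n = -65.0, 0.05, 0.6, 0.32
    trace = np.empty(len(I))
    for t, i_t in enumerate(I):
        a_m = 0.1 * (v + 40.0) / (1.0 - np.exp(-(v + 40.0) / 10.0))
        b_m = 4.0 * np.exp(-(v + 65.0) / 18.0)
        a_h = 0.07 * np.exp(-(v + 65.0) / 20.0)
        b_h = 1.0 / (1.0 + np.exp(-(v + 35.0) / 10.0))
        a_n = 0.01 * (v + 55.0) / (1.0 - np.exp(-(v + 55.0) / 10.0))
        b_n = 0.125 * np.exp(-(v + 65.0) / 80.0)
        m += dt * (a_m * (1.0 - m) - b_m * m)
        h += dt * (a_h * (1.0 - h) - b_h * h)
        n += dt * (a_n * (1.0 - n) - b_n * n)
        i_ion = g_na * m**3 * h * (v - e_na) + g_k * n**4 * (v - e_k) + g_l * (v - e_l)
        v += dt / c_m * (i_t - i_ion)
        trace[t] = v
    return trace

current = np.full(5000, 10.0)  # 10 uA/cm^2 for 50 ms at dt = 0.01 ms
lif_trace, lif_spikes = simulate_lif(current)
hh_trace = simulate_hh(current)
print(f"LIF spikes: {len(lif_spikes)}; HH peak voltage: {hh_trace.max():.1f} mV")

Matching the richer firing behaviour of the four-variable HH unit with one-variable LIF units is, in the paper's framing, exactly what forces the LIF network to grow externally.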
The MultiMNIST dataset can be found at https://drive.google.com/open?id=1VnmCmBAVh8f_BKJg1KYx-E137gBLXbGG or in the GitHub public repository at https://github.com/Xi-L/ParetoMTL/tree/master/multiMNIST/data. The data used in the deep reinforcement learning experiment are generated from the ‘InvertedDoublePendulum-v4’ and ‘InvertedPendulum-v4’ simulation environments in the gym library (https://gym.openai.com). Source data for Figs. 3–5 can be accessed via the following Zenodo repository: https://doi.org/10.5281/zenodo.12531887 (ref. 55). Source data are provided with this paper.
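As an illustration of how transition data can be drawn from those two environments (a generic sketch assuming the gym ≥ 0.26 API with the MuJoCo backend installed, not the authors' data-collection pipeline, which trains an SAC agent rather than a random policy):

import gym

# Swap in "InvertedDoublePendulum-v4" for the second environment.
env = gym.make("InvertedPendulum-v4")
obs, info = env.reset(seed=0)
transitions = []
for _ in range(1000):
    action = env.action_space.sample()  # random policy, purely for illustration
    next_obs, reward, terminated, truncated, info = env.step(action)
    transitions.append((obs, action, reward, next_obs))
    obs = next_obs
    if terminated or truncated:
        obs, info = env.reset()
env.close()
print(f"Collected {len(transitions)} transitions")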
All of the source code for reproducing the results in this paper is available at https://github.com/helx-20/complexity (ref. 55). We use Python v.3.8.12 (https://www.python.org/), NumPy v.1.21.2 (https://github.com/numpy/numpy), SciPy v.1.7.3 (https://www.scipy.org/), Matplotlib v.3.5.1 (https://github.com/matplotlib/matplotlib), Pandas v.1.4.1 (https://github.com/pandas-dev/pandas), Pillow v.8.4.0 (https://pypi.org/project/Pillow), MATLAB R2021a and the SAC algorithm (https://github.com/haarnoja/sac).
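A quick way to check a local environment against those pinned versions (an illustrative helper, not part of the released repository):

import importlib.metadata as md

# Versions taken from the listing above; MATLAB and the SAC reference
# implementation live outside pip and are not checked here.
expected = {
    "numpy": "1.21.2",
    "scipy": "1.7.3",
    "matplotlib": "3.5.1",
    "pandas": "1.4.1",
    "Pillow": "8.4.0",
}
for pkg, want in expected.items():
    try:
        have = md.version(pkg)
    except md.PackageNotFoundError:
        print(f"{pkg}=={want}: not installed")
        continue
    status = "OK" if have == want else f"version mismatch (found {have})"
    print(f"{pkg}=={want}: {status}")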
Ouyang, L. et al. Training language models to follow instructions with human feedback. in Advances in Neural Information Processing Systems Vol. 35 27730–27744 (NeurIPS, 2022).
Raffel, C. et al. Exploring the limits of transfer learning with a unified text-to-text transformer. J. Mach. Learn. Res. 21, 5485–5551 (2020).
Bommasani, R. et al. On the opportunities and risks of foundation models. Preprint at https://arxiv.org/abs/2108.07258 (2021).
Rosenblatt, F. The perceptron: a probabilistic model for information storage and organization in the brain. Psychol. Rev. 65, 386–408 (1958).
LeCun, Y. et al. Backpropagation applied to handwritten zip code recognition. Neural Comput. 1, 541–551 (1989).
Krizhevsky, A., Sutskever, I. & Hinton, G. E. ImageNet classification with deep convolutional neural networks. Commun. ACM 60, 84–90 (2017).
He, K., Zhang, X., Ren, S. & Sun, J. Deep residual learning for image recognition. in Proc. IEEE Conference on Computer Vision and Pattern Recognition 770–778 (IEEE, 2016).
Hopfield, J. J. Neural networks and physical systems with emergent collective computational abilities. Proc. Natl Acad. Sci. USA 79, 2554–2558 (1982).
Hochreiter, S. & Schmidhuber, J. Long short-term memory. Neural Comput. 9, 1735–1780 (1997).
Cho, K. et al. Learning phrase representations using RNN encoder–decoder for statistical machine translation. Preprint at https://arxiv.org/abs/1406.1078 (2014).
Vaswani, A. et al. Attention is all you need. in 31st Conference on Neural Information Processing Systems (NIPS, 2017).
Devlin, J., Chang, M.-W., Lee, K. & Toutanova, K. BERT: pre-training of deep bidirectional transformers for language understanding. in Proc. 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers) 4171–4186 (Association for Computational Linguistics, 2019).
Dosovitskiy, A. et al. An image is worth 16 × 16 words: transformers for image recognition at scale. in International Conference on Learning Representations (2021).
Liu, Z. et al. Swin transformer: hierarchical vision transformer using shifted windows. in Proc. IEEE/CVF International Conference on Computer Vision 10012–10022 (IEEE, 2021).
Li, Y. et al. Competition-level code generation with AlphaCode. Science 378, 1092–1097 (2022).
Ramesh, A., Dhariwal, P., Nichol, A., Chu, C. & Chen, M. Hierarchical text-conditional image generation with CLIP latents. Preprint at https://arxiv.org/abs/2204.06125 (2022).
Dauparas, J. et al. Robust deep learning-based protein sequence design using ProteinMPNN. Science 378, 49–56 (2022).
Dayan, P. & Abbott, L. F. Theoretical Neuroscience: Computational and Mathematical Modeling of Neural Systems (MIT Press, 2005).
Markram, H. The blue brain project. Nat. Rev. Neurosci. 7, 153–160 (2006).
Izhikevich, E. M. Simple model of spiking neurons. IEEE Trans. Neural Netw. 14, 1569–1572 (2003).
Eliasmith, C. et al. A large-scale model of the functioning brain. Science 338, 1202–1205 (2012).
Wilson, H. R. & Cowan, J. D. Excitatory and inhibitory interactions in localized populations of model neurons. Biophys. J. 12, 1–24 (1972).
FitzHugh, R. Mathematical models of threshold phenomena in the nerve membrane. Bull. Math. Biophys. 17, 257–278 (1955).
Nagumo, J., Arimoto, S. & Yoshizawa, S. An active pulse transmission line simulating nerve axon. Proc. IRE 50, 2061–2070 (1962).
Lapicque, L. Recherches quantitatives sur l’excitation electrique des nerfs traitee comme une polarization. J. Physiol. Pathol. Générale 9, 620–635 (1907).
Ermentrout, G. B. & Kopell, N. Parabolic bursting in an excitable system coupled with a slow oscillation. SIAM J. Appl. Math. 46, 233–253 (1986).
Fourcaud-Trocmé, N., Hansel, D., Van Vreeswijk, C. & Brunel, N. How spike generation mechanisms determine the neuronal response to fluctuating inputs. J. Neurosci. 23, 11628–11640 (2003).
Teeter, C. et al. Generalized leaky integrate-and-fire models classify multiple neuron types. Nat. Commun. 9, 709 (2018).
Hodgkin, A. L. & Huxley, A. F. A quantitative description of membrane current and its application to conduction and excitation in nerve. J. Physiol. 117, 500–544 (1952).
Connor, J. & Stevens, C. Prediction of repetitive firing behaviour from voltage clamp data on an isolated neurone soma. J. Physiol. 213, 31–53 (1971).
Hindmarsh, J. L. & Rose, R. A model of neuronal bursting using three coupled first order differential equations. Proc. R. Soc. Lond. B 221, 87–102 (1984).
de Menezes, M. A. & Barabási, A.-L. Separating internal and external dynamics of complex systems. Phys. Rev. Lett. 93, 068701 (2004).
Ko, K.-I. On the computational complexity of ordinary differential equations. Inf. Control 58, 157–194 (1983).
Waibel, A., Hanazawa, T., Hinton, G., Shikano, K. & Lang, K. J. Phoneme recognition using time-delay neural networks. IEEE Trans. Acoust. Speech Signal Process. 37, 328–339 (1989).
Roy, K., Jaiswal, A. & Panda, P. Towards spike-based machine intelligence with neuromorphic computing. Nature 575, 607–617 (2019).
Pei, J. et al. Towards artificial general intelligence with hybrid Tianjic chip architecture. Nature 572, 106–111 (2019).
Davies, M. et al. Loihi: a neuromorphic manycore processor with on-chip learning. IEEE Micro 38, 82–99 (2018).
Zhou, P., Choi, D.-U., Lu, W. D., Kang, S.-M. & Eshraghian, J. K. Gradient-based neuromorphic learning on dynamical RRAM arrays. IEEE J. Emerg. Sel. Top. Circuits Syst. 12, 888–897 (2022).
Wu, Y., Deng, L., Li, G., Zhu, J. & Shi, L. Spatio-temporal backpropagation for training high-performance spiking neural networks. Front. Neurosci. 12, 331 (2018).
Haarnoja, T., Zhou, A., Abbeel, P. & Levine, S. Soft actor-critic: off-policy maximum entropy deep reinforcement learning with a stochastic actor. in International Conference on Machine Learning 1861–1870 (PMLR, 2018).
Tishby, N., Pereira, F. C. & Bialek, W. The information bottleneck method. Preprint at https://arxiv.org/abs/physics/0004057 (2000).
Johnson, M. H. Functional brain development in humans. Nat. Rev. Neurosci. 2, 475–483 (2001).
Rakic, P. Evolution of the neocortex: a perspective from developmental biology. Nat. Rev. Neurosci. 10, 724–735 (2009).
Kandel, E. R. et al. Principles of Neural Science Vol. 4 (McGraw-Hill, 2000).
Stelzer, F., Röhm, A., Vicente, R., Fischer, I. & Yanchuk, S. Deep neural networks using a single neuron: folded-in-time architecture using feedback-modulated delay loops. Nat. Commun. 12, 5164 (2021).
Adeli, H. & Park, H. S. Optimization of space structures by neural dynamics. Neural Netw. 8, 769–781 (1995).
Dubreuil, A., Valente, A., Beiran, M., Mastrogiuseppe, F. & Ostojic, S. The role of population structure in computations through neural dynamics. Nat. Neurosci. 25, 783–794 (2022).
Tian, Y. et al. Theoretical foundations of studying criticality in the brain. Netw. Neurosci. 6, 1148–1185 (2022).
Gidon, A. et al. Dendritic action potentials and computation in human layer 2/3 cortical neurons. Science 367, 83–87 (2020).
Koch, C., Bernander, Ö. & Douglas, R. J. Do neurons have a voltage or a current threshold for action potential initiation? J. Comput. Neurosci. 2, 63–82 (1995).
Tavanaei, A., Ghodrati, M., Kheradpisheh, S. R., Masquelier, T. & Maida, A. Deep learning in spiking neural networks. Neural Netw. 111, 47–63 (2019).
Lin, X., Zhen, H.-L., Li, Z., Zhang, Q.-F. & Kwong, S. Pareto multi-task learning. in 33rd Conference on Neural Information Processing Systems (NeurIPS, 2019).
Molchanov, P., Tyree, S., Karras, T., Aila, T. & Kautz, J. Pruning convolutional neural networks for resource efficient inference. in International Conference on Learning Representations (2017).
Alemi, A. A., Fischer, I., Dillon, J. V. & Murphy, K. Deep variational information bottleneck. in International Conference on Learning Representations (2017).
He, L. Network model with internal complexity bridges artificial intelligence and neuroscience. Zenodo https://doi.org/10.5281/zenodo.12531887 (2024).
This work was partially supported by the National Science Foundation for Distinguished Young Scholars (grant no. 62325603), the National Natural Science Foundation of China (grant nos. 62236009, U22A20103, 62441606, 62332002, 62027804, 62425101 and 62088102), the Beijing Natural Science Foundation for Distinguished Young Scholars (grant no. JQ21015), The Hong Kong Polytechnic University (project P0050631) and the CAAI-MindSpore Open Fund, developed on the OpenI Community.
These authors contributed equally: Linxuan He, Yunhui Xu, Weihua He, Yihan Lin.
Key Laboratory of Brain Cognition and Brain-inspired Intelligence Technology, Institute of Automation, Chinese Academy of Sciences, Beijing, P.R. China
Linxuan He, Yunhui Xu, Bo Xu & Guoqi Li
Xingjian College, Tsinghua University, Beijing, P.R. China
Linxuan He
Department of Automation, Tsinghua University, Beijing, P.R. China
Linxuan He
Department of Physics, Tsinghua University, Beijing, P.R. China
Yunhui Xu
Department of Precision Instrument, Tsinghua University, Beijing, P.R. China
Weihua He, Yihan Lin & Wenhui Wang
Department of Psychology, Tsinghua University, Beijing, P.R. China
Yang Tian
Department of Computing, The Hong Kong Polytechnic University, Hong Kong, Hong Kong SAR
Yujie Wu
Huawei Technologies Company Limited, Shenzhen, P.R. China
Ziyang Zhang
School of Automation, Northwestern Polytechnical University, Xi’an, P.R. China
Junwei Han
School of Computer Science, Peking University, Beijing, P.R. China
Yonghong Tian
Department of Networked Intelligence, Pengcheng Laboratory, Shenzhen, P.R. China
Yonghong Tian
School of Electronic and Computer Engineering, Shenzhen Graduate School, Peking University, Beijing, P.R. China
Yonghong Tian
School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, P.R. China
Bo Xu & Guoqi Li
G.L. proposed the initial idea and supervised the whole project. L.H. led the experiments and Y.X. led the theoretical derivation. Y.X. took part in writing the code concerning the computational efficiency measurement and mutual information analysis. L.H., Y.X., W.H. and Y.L. took part in modifying the neuron models. W.H. and Y.L. took part in the design of the simulation and deep learning experiments, the computational efficiency measurement and the mutual information analysis; they also wrote the code concerning the network models and deep learning experiments. Yang Tian contributed to the design of the mutual information analysis. Y.W. contributed to writing the code concerning neuron models and HH network training methods. W.W. and Z.Z. contributed to the design of the deep learning experiments. J.H., Yonghong Tian and B.X. provided guidance for this work. G.L. led the writing of this paper, with all authors assisting in writing and reviewing the paper.
Correspondence to Yonghong Tian, Bo Xu or Guoqi Li.
The authors declare no competing interests.
Nature Computational Science thanks Jason K. Eshraghian, Nicolas Fourcaud-Trocmé and the other, anonymous, reviewer(s) for their contribution to the peer review of this work. Peer reviewer reports are available. Primary Handling Editor: Ananya Rastogi, in collaboration with the Nature Computational Science team.
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Proof of Theorem 1, supporting experiments on network equivalence, and Supplementary Figs. 1–9 and Tables 1–10.
Data for Supplementary Fig. 1.
Data for Supplementary Fig. 3.
Data for Supplementary Fig. 8.
Data for Supplementary Fig. 9.
Source data for Fig. 3.
Source data for Fig. 4.
Source data for Fig. 5.
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
Reprints and permissions
He, L., Xu, Y., He, W. et al. Network model with internal complexity bridges artificial intelligence and neuroscience. Nat Comput Sci 4, 584–599 (2024). https://doi.org/10.1038/s43588-024-00674-9