Fundamentals of Computational Neuroscience
Oxford University Press Inc (Publisher)
978-0-19-851583-8 (ISBN)
- This title has since appeared in a new edition
Computational neuroscience is the theoretical study of the brain, aiming to uncover the principles and mechanisms that guide the development, organization, information processing, and mental functions of the nervous system. Although not a new field, it is only recently that enough knowledge has been gathered to establish computational neuroscience as a scientific discipline in its own right. Given the complexity of the field, and its increasing importance in advancing our understanding of how the brain works, there has long been a need for an introductory text on what is often assumed to be an impenetrable topic. "Fundamentals of Computational Neuroscience" is one of the first introductory books on the subject. It introduces the theoretical foundations of neuroscience with a focus on the nature of information processing in the brain, and it introduces and motivates simplified neuron models that are suitable for exploring information processing in large, brain-like networks.
It goes on to introduce several fundamental network architectures, discusses their relevance for information processing in the brain, and gives examples of models of higher-order cognitive functions to demonstrate the insight that can be gained from such studies. Each chapter opens with experimental facts and conceptual questions related to the study of brain function. A further feature is the inclusion of simple Matlab programs that can be used to explore many of the mechanisms explained in the book; an accompanying webpage makes the programs available for download. The book is aimed at readers in the brain and cognitive sciences, from graduate level upwards.
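To give a flavour of such programs, here is a minimal sketch, not taken from the book or its webpage, of a leaky integrate-and-fire neuron simulated with forward Euler integration, the kind of model treated in Chapters 3 and 12. All parameter values are illustrative assumptions:

```matlab
% Illustrative sketch: leaky integrate-and-fire neuron, forward Euler.
% Membrane equation: tau * dv/dt = -(v - E_L) + R * I_ext,
% with a spike recorded and v reset whenever v crosses v_thresh.
% All parameter values below are assumptions chosen for demonstration.
tau      = 10;    % membrane time constant (ms)
E_L      = -65;   % resting potential (mV)
v_thresh = -55;   % firing threshold (mV)
v_reset  = -75;   % reset potential (mV)
R        = 10;    % membrane resistance (MOhm)
I_ext    = 1.2;   % constant input current (nA)

dt = 0.1;                 % integration time step (ms)
t  = 0:dt:100;            % simulate 100 ms
v  = E_L * ones(size(t)); % membrane potential trace
spike_times = [];         % recorded spike times

for k = 1:length(t)-1
    % one forward Euler step of the membrane equation
    v(k+1) = v(k) + (dt/tau) * (-(v(k) - E_L) + R * I_ext);
    if v(k+1) >= v_thresh           % threshold crossing: spike and reset
        spike_times(end+1) = t(k+1);
        v(k+1) = v_reset;
    end
end

plot(t, v); xlabel('time (ms)'); ylabel('membrane potential (mV)');
```

With the assumed constant input the neuron fires regularly; lowering I_ext below (v_thresh - E_L)/R = 1 nA silences it, which makes the threshold behaviour easy to explore interactively.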
- 1.1 WHAT IS COMPUTATIONAL NEUROSCIENCE?
- 1.2 Domains in computational neuroscience
- 1.3 What is a model?
- 1.4 Emergence and adaptation
- 1.5 From exploration to a theory of the brain
- 1.6 Some notes on the book
- 2.1 MODELLING BIOLOGICAL NEURONS
- 2.2 Neurons are specialized cells
- 2.3 Basic synaptic mechanisms
- 2.4 The generation of action potentials: Hodgkin-Huxley equations
- 2.5 Dendritic trees, the propagation of action potentials, and compartmental models
- 2.6 Above and beyond the Hodgkin-Huxley neuron: fatigue, bursting, and simplifications
- 3.1 INTEGRATE-AND-FIRE NEURONS
- 3.2 The spike-response model
- 3.3 Spike time variability
- 3.4 Noise models for IF neurons
- 4.1 ORGANIZATIONS OF NEURONAL NETWORKS
- 4.2 Information transmission in networks
- 4.3 Population dynamics: modelling the average behaviour of neurons
- 4.4 The sigma node
- 4.5 Networks with non-classical synapses: the sigma-pi node
- 5.1 HOW NEURONS TALK
- 5.2 Information theory
- 5.3 Information in spike trains
- 5.4 Population coding and decoding
- 5.5 Distributed representation
- 6.1 PERCEPTION, FUNCTION REPRESENTATION, AND LOOK-UP TABLES
- 6.2 The sigma node as perceptron
- 6.3 Multi-layer mapping networks
- 6.4 Learning, generalization, and biological interpretations
- 6.5 Self-organizing network architectures and genetic algorithms
- 6.6 Mapping networks with context units
- 6.7 Probabilistic mapping networks
- 7.1 ASSOCIATIVE MEMORY AND HEBBIAN LEARNING
- 7.2 An example of learning association
- 7.3 The biochemical basis of synaptic plasticity
- 7.4 The temporal structure of Hebbian plasticity: LTP and LTD
- 7.5 Mathematical formulation of Hebbian plasticity
- 7.6 Weight distributions
- 7.7 Neuronal response variability, gain control, and scaling
- 7.8 Features of associators and Hebbian learning
- 8.1 SHORT-TERM MEMORY AND REVERBERATING NETWORK ACTIVITY
- 8.2 Long-term memory and auto-associators
- 8.3 Point attractor networks: the Grossberg-Hopfield model
- 8.4 The phase diagram and the Grossberg-Hopfield model
- 8.5 Sparse attractor neural networks
- 8.6 Chaotic networks: a dynamical systems view
- 8.7 Biologically more realistic variations of attractor networks
- 9.1 SPATIAL REPRESENTATIONS AND THE SENSE OF DIRECTION
- 9.2 Learning with continuous pattern representations
- 9.3 Asymptotic states and the dynamics of neural fields
- 9.4 Path integration, Hebbian trace rule, and sequence learning
- 9.5 Competitive networks and self-organizing maps
- 10.1 MOTOR LEARNING AND CONTROL
- 10.2 The delta rule
- 10.3 Generalized delta rules
- 10.4 Reward learning
- 11.1 SYSTEM-LEVEL ANATOMY OF THE BRAIN
- 11.2 Modular mapping networks
- 11.3 Coupled attractor networks
- 11.4 Working memory
- 11.5 Attentive vision
- 11.6 An interconnecting workspace hypothesis
- 12.1 INTRODUCTION TO THE MATLAB PROGRAMMING ENVIRONMENT
- 12.2 Spiking neurons and numerical integration in MATLAB
- 12.3 Associators and Hebbian learning
- 12.4 Recurrent networks and network dynamics
- 12.5 Continuous attractor neural networks
- 12.6 Error-backpropagation networks
- SOME USEFUL MATHEMATICS
- BASIC PROBABILITY THEORY
- NUMERICAL INTEGRATION
- INDEX
| Publication date (per publisher) | 20 June 2002 |
|---|---|
| Additional information | numerous figures |
| Place of publication | New York |
| Language | English |
| Subject area | Computer Science ► Theory / Studies ► Artificial Intelligence / Robotics |
| | Natural Sciences ► Biology ► Human Biology |
| | Natural Sciences ► Biology ► Zoology |
| ISBN-10 | 0-19-851583-9 / 0198515839 |
| ISBN-13 | 978-0-19-851583-8 / 9780198515838 |
| Condition | New |