Feed-Forward Neural Networks
Vector Decomposition Analysis, Modelling and Analog Implementation
1995 | 1995 ed.
Kluwer Academic Publishers (publisher)
978-0-7923-9567-6 (ISBN)
Feed-Forward Neural Networks: Vector Decomposition Analysis, Modelling and Analog Implementation presents a novel method for the mathematical analysis of neural networks that learn according to the back-propagation algorithm. The book also discusses some recent alternative algorithms for hardware-implemented, perceptron-like neural networks. The method permits a simple analysis of the learning behaviour of neural networks, so that specifications for their building blocks can be readily obtained.
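For readers unfamiliar with the learning rule the book analyses, the following is a minimal, illustrative sketch of back-propagation in a single-layer feed-forward network (the simplest case treated in the early chapters). It is not the book's vector decomposition method; the layer sizes, learning rate, and tanh activation are assumptions chosen only for the example.

```python
# Minimal sketch: a single-layer feed-forward network trained by
# back-propagation (delta rule). Layer sizes, learning rate and the
# tanh activation are illustrative assumptions, not the book's setup.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out, eta = 4, 2, 0.1                    # assumed dimensions and learning rate
W = rng.normal(scale=0.1, size=(n_out, n_in))   # weight matrix
b = np.zeros(n_out)                             # biases

def forward(x):
    """Neuron activation: weighted sum followed by tanh squashing."""
    return np.tanh(W @ x + b)

def train_step(x, target):
    """One back-propagation step for the single-layer case."""
    global W, b
    y = forward(x)
    err = target - y                    # output error
    delta = err * (1.0 - y**2)          # error * f'(net) for f = tanh
    W += eta * np.outer(delta, x)       # weight adaptation
    b += eta * delta
    return 0.5 * np.sum(err**2)         # quadratic cost for monitoring

# Toy usage: drive one input/target pair toward zero error.
x, t = rng.normal(size=n_in), np.array([0.5, -0.5])
for _ in range(200):
    loss = train_step(x, t)
```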
Starting with the derivation of a specification and ending with its hardware implementation, analog hard-wired, feed-forward neural networks with on-chip back-propagation learning are designed in their entirety. On-chip learning is necessary whenever fixed weight configurations cannot be used; it also helps eliminate most of the mismatches and parameter tolerances that occur in hard-wired neural network chips.
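The following sketch hints at why on-chip learning can absorb such imperfections (the theme of the chapters on precision requirements and discretized weight adaptation). Each synapse multiplier is given a hypothetical fixed gain error and the weight updates are quantized; the same delta-rule loop as above still drives the output error down because the learned weights compensate for the gains. The 1% gain spread and the 8-bit update step are assumptions for illustration only.

```python
# Sketch: analog multiplier mismatch absorbed by on-chip learning with
# discretized weight adaptation. Gain spread and quantization step are
# hypothetical values, not figures from the book.
import numpy as np

rng = np.random.default_rng(1)
n_in, n_out, eta = 4, 2, 0.1
gain = 1.0 + 0.01 * rng.normal(size=(n_out, n_in))  # per-synapse gain mismatch
W = np.zeros((n_out, n_in))

def quantize(w, step=1.0 / 256):
    """Model a discretized (e.g. 8-bit) weight adaptation."""
    return np.round(w / step) * step

def forward(x):
    return np.tanh((gain * W) @ x)      # mismatched multipliers stay in the loop

x, t = rng.normal(size=n_in), np.array([0.4, -0.3])
for _ in range(500):
    y = forward(x)
    delta = (t - y) * (1.0 - y**2)      # back-propagated error for tanh units
    W = quantize(W + eta * np.outer(delta, x))
```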
Fully analog neural networks have several advantages over other implementations: small chip area, low power consumption, and high-speed operation.
Feed-Forward Neural Networks is an excellent reference and may be used as a text for advanced courses.
1. Introduction
2. The Vector Decomposition Method
3. Dynamics of Single-Layer Nets
4. Unipolar Input Signals in Single-Layer Feed-Forward Neural Networks
5. Cross-Talk in Single-Layer Feed-Forward Neural Networks
6. Precision Requirements for Analog Weight Adaptation Circuitry for Single-Layer Nets
7. Discretization of Weight Adaptations in Single-Layer Nets
8. Learning Behavior and Temporary Minima of Two-Layer Neural Networks
9. Biases and Unipolar Input Signals for Two-Layer Neural Networks
10. Cost Functions for Two-Layer Neural Networks
11. Some Issues for f'(x)
12. Feed-Forward Hardware
13. Analog Weight Adaptation Hardware
14. Conclusions

Nomenclature
| Publication date (per publisher) | 31.5.1995 |
|---|---|
| Series | The Springer International Series in Engineering and Computer Science; 314 |
| Additional info | XIII, 238 p. |
| Place of publication | New York |
| Language | English |
| Dimensions | 155 x 235 mm |
| Subject area | Computer Science ► Theory / Studies ► Artificial Intelligence / Robotics |
| | Engineering ► Electrical Engineering / Energy Technology |
| ISBN-10 | 0-7923-9567-0 / 0792395670 |
| ISBN-13 | 978-0-7923-9567-6 / 9780792395676 |
| Condition | New |