
Information Theoretic Learning (eBook)

Renyi's Entropy and Kernel Perspectives
eBook Download: PDF
2010
XIV, 448 pages
Springer New York (publisher)
978-1-4419-1570-2 (ISBN)

Reading and media samples:
Information Theoretic Learning – José C. Principe
€213.99 incl. VAT
(CHF 208.95)
eBook sales are handled by Lehmanns Media GmbH (Berlin) at the price in euros, incl. VAT.
  • Download available immediately
This book is the first cohesive treatment of ITL algorithms for adapting linear or nonlinear learning machines in both supervised and unsupervised paradigms. It compares the performance of ITL algorithms with that of their second-order counterparts in many applications.

José C. Principe is Distinguished Professor of Electrical and Biomedical Engineering, and BellSouth Professor at the University of Florida, and the Founder and Director of the Computational NeuroEngineering Laboratory. He is an IEEE and AIMBE Fellow, Past President of the International Neural Network Society, Past Editor-in-Chief of the IEEE Trans. on Biomedical Engineering and the Founder Editor-in-Chief of the IEEE Reviews on Biomedical Engineering. He has written an interactive electronic book on Neural Networks, a book on Brain Machine Interface Engineering and more recently a book on Kernel Adaptive Filtering, and was awarded the 2011 IEEE Neural Network Pioneer Award.

Contents:
1. Information Theory, Machine Learning, and Reproducing Kernel Hilbert Spaces
2. Renyi’s Entropy, Divergence and Their Nonparametric Estimators
3. Adaptive Information Filtering with Error Entropy and Error Correntropy Criteria
4. Algorithms for Entropy and Correntropy Adaptation with Applications to Linear Systems
5. Nonlinear Adaptive Filtering with MEE, MCC, and Applications
6. Classification with EEC, Divergence Measures, and Error Bounds
7. Clustering with ITL Principles
8. Self-Organizing ITL Principles for Unsupervised Learning
9. A Reproducing Kernel Hilbert Space Framework for ITL
10. Correntropy for Random Variables: Properties and Applications in Statistical Inference
11. Correntropy for Random Processes: Properties and Applications in Signal Processing

"9 A Reproducing Kernel Hilbert Space Framework for ITL (p. 351-352)

9.1 Introduction


During the last decade, research on Mercer kernel-based learning algorithms has flourished [226,289,294]. These algorithms include, for example, the support vector machine (SVM) [63], kernel principal component analysis (KPCA) [289], and kernel Fisher discriminant analysis (KFDA) [219]. The common property of these methods is that they operate linearly, as they are explicitly expressed in terms of inner products in a transformed data space that is a reproducing kernel Hilbert space (RKHS).

Most often they correspond to nonlinear operators in the data space, and they are still relatively easy to compute using the so-called “kernel-trick”. The kernel trick is no trick at all; it refers to a property of the RKHS that enables the computation of inner products in a potentially infinite-dimensional feature space, by a simple kernel evaluation in the input space. As we may expect, this is a computational saving step that is one of the big appeals of RKHS.

At first glance one may even think that it defeats the “no free lunch theorem” (get something for nothing), but the fact of the matter is that the price of RKHS is the need for regularization and the memory requirements, as these are memory-intensive methods. Kernel-based methods (sometimes also called Mercer kernel methods) have been applied successfully in several applications, such as pattern and object recognition [194], time series prediction [225], and DNA and protein analysis [350], to name just a few.

Kernel-based methods rely on the assumption that projection to the high-dimensional feature space simplifies data handling, as suggested by Cover’s theorem: Cover showed that the probability of shattering data (i.e., separating it exactly by a hyperplane) approaches one with a linear increase in space dimension [64].

In the case of the SVM, the assumption is that data classes become linearly separable, and therefore a separating hyperplane is sufficient for perfect classification. In practice, one cannot know for sure if this assumption holds. In fact, one has to hope that the user chooses a kernel (and its free parameter) that shatters the data, and because this is improbable, the need to include the slack variable arises. The innovation of SVMs is exactly on how to train the classifiers with the principle of structural risk minimization [323]."
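The kernel trick described in the excerpt can be made concrete with a small sketch (not from the book): for the homogeneous degree-2 polynomial kernel κ(x, y) = (xᵀy)² on R², the explicit feature map φ(x) = (x₁², √2·x₁x₂, x₂²) satisfies ⟨φ(x), φ(y)⟩ = κ(x, y), so the feature-space inner product is obtained by a single kernel evaluation in the input space:

```python
import numpy as np

def phi(x):
    # Explicit feature map for the degree-2 polynomial kernel on R^2:
    # phi(x) = (x1^2, sqrt(2)*x1*x2, x2^2), so <phi(x), phi(y)> = (x . y)^2
    return np.array([x[0]**2, np.sqrt(2) * x[0] * x[1], x[1]**2])

def poly_kernel(x, y):
    # Kernel evaluation in the input space -- no feature map needed
    return np.dot(x, y) ** 2

x = np.array([1.0, 2.0])
y = np.array([3.0, 4.0])

explicit = np.dot(phi(x), phi(y))  # inner product computed in feature space
kernelized = poly_kernel(x, y)     # same value via one kernel evaluation

print(explicit, kernelized)  # both equal (1*3 + 2*4)^2 = 121.0
```

For the Gaussian kernel used throughout ITL, the feature space is infinite-dimensional and no finite φ can be written down, which is exactly why the kernel-evaluation shortcut matters.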

Publication date (per publisher): 6 Apr 2010
Series: Information Science and Statistics
Additional info: XIV, 448 p.
Place of publication: New York
Language: English
Subject areas: Computer Science – Theory / Study – Artificial Intelligence / Robotics
Mathematics / Computer Science – Mathematics – Statistics
Engineering – Electrical Engineering / Energy Technology
Business – Business Administration / Management – Business Informatics
Keywords: Computational Intelligence • Correntropy • Information theoretic learning • Kernel • learning • machine learning • Nongaussian signal processing • Remote Sensing/Photogrammetry • RKHS and information theory • Robust adaptive filtering
ISBN-10 1-4419-1570-2 / 1441915702
ISBN-13 978-1-4419-1570-2 / 9781441915702
PDF (watermark)

DRM: digital watermark
This eBook contains a digital watermark and is thus personalized for you. If the eBook is improperly passed on to third parties, it can be traced back to the source.

File format: PDF (Portable Document Format)
With its fixed page layout, PDF is particularly well suited to technical books with columns, tables, and figures. A PDF can be displayed on almost all devices, but it is only of limited use on small displays (smartphone, eReader).

System requirements:
PC/Mac: You can read this eBook on a PC or Mac. You need a PDF viewer, e.g. Adobe Reader or Adobe Digital Editions.
eReader: This eBook can be read on (almost) all eBook readers. It is not, however, compatible with the Amazon Kindle.
Smartphone/tablet: Whether Apple or Android, you can read this eBook. You need a PDF viewer, e.g. the free Adobe Digital Editions app.

Buying eBooks from abroad
For tax reasons, we can only sell eBooks within Germany and Switzerland. Unfortunately, we cannot fulfil eBook orders from other countries.
