Data Mining (eBook)
John Wiley & Sons (Publisher)
978-1-119-51607-1 (ISBN)
Presents the latest techniques for analyzing and extracting information from large amounts of data in high-dimensional data spaces
The revised and updated third edition of Data Mining contains in one volume an introduction to a systematic approach to the analysis of large data sets that integrates results from disciplines such as statistics, artificial intelligence, databases, pattern recognition, and computer visualization. Advances in deep learning technology have opened an entire new spectrum of applications. The author, a noted expert on the topic, explains the basic concepts, models, and methodologies that have been developed in recent years.
This new edition introduces and expands on many topics and provides revised sections on software tools and data-mining applications. Additional changes include an updated list of references for further study and an extended list of problems and questions for each chapter. This third edition presents new and expanded information that:
• Explores big data and cloud computing
• Examines deep learning
• Includes information on convolutional neural networks (CNN)
• Covers reinforcement learning
• Addresses semi-supervised learning and S3VM
• Reviews model evaluation for unbalanced data
Written for graduate students in computer science, computer engineers, and computer information systems professionals, the updated third edition of Data Mining continues to provide an essential guide to the basic principles of the technology and the most recent developments in the field.
MEHMED KANTARDZIC, PHD, is a Professor in the Department of Computer Engineering and Computer Science (CECS) at the University of Louisville, and is Director of the Data Mining Lab and CECS Graduate Programs. He is a member of IEEE, ISCA, KAS, WSEAS, IEE, and SPIE.
1
DATA‐MINING CONCEPTS
Chapter Objectives
- Understand the need for analyses of large, complex, information‐rich data sets.
- Identify the goals and primary tasks of the data‐mining process.
- Describe the roots of data‐mining technology.
- Recognize the iterative character of a data‐mining process and specify its basic steps.
- Explain the influence of data quality on a data‐mining process.
- Establish the relation between data warehousing and data mining.
- Discuss concepts of big data and data science.
1.1 INTRODUCTION
Modern science and engineering are based on using first‐principle models to describe physical, biological, and social systems. Such an approach starts with a basic scientific model, such as Newton's laws of motion or Maxwell's equations in electromagnetism, and then builds various applications in mechanical or electrical engineering upon it. In this approach, experimental data are used to verify the underlying first‐principle models and to estimate some of the parameters that are difficult or sometimes impossible to measure directly. However, in many domains the underlying first principles are unknown, or the systems under study are too complex to be mathematically formalized. With the growing use of computers, there is a great amount of data being generated by such systems. In the absence of first‐principle models, such readily available data can be used to derive models by estimating useful relationships between a system's variables (i.e., unknown input–output dependencies). Thus there is currently a paradigm shift from classical modeling and analyses based on first principles to developing models and the corresponding analyses directly from data.
We have grown accustomed gradually to the fact that there are tremendous volumes of data filling our computers, networks, and lives. Government agencies, scientific institutions, and businesses have all dedicated enormous resources to collecting and storing data. In reality, only a small amount of these data will ever be used because, in many cases, the volumes are simply too large to manage or the data structures themselves are too complicated to be analyzed effectively. How could this happen? The primary reason is that the original effort to create a data set is often focused on issues such as storage efficiency; it does not include a plan for how the data will eventually be used and analyzed.
The need to understand large, complex, information‐rich data sets is common to virtually all fields of business, science, and engineering. In the business world, corporate and customer data are becoming recognized as a strategic asset. The ability to extract useful knowledge hidden in these data and to act on that knowledge is becoming increasingly important in today's competitive world. The entire process of applying a computer‐based methodology, including new techniques, for discovering knowledge from data is called data mining.
Data mining is an iterative process within which progress is defined by discovery, through either automatic or manual methods. Data mining is most useful in an exploratory analysis scenario in which there are no predetermined notions about what will constitute an “interesting” outcome. Data mining is the search for new, valuable, and nontrivial information in large volumes of data. It is a cooperative effort of humans and computers. Best results are achieved by balancing the knowledge of human experts in describing problems and goals with the search capabilities of computers.
In practice, the two primary goals of data mining tend to be prediction and description. Prediction involves using some variables or fields in the data set to predict unknown or future values of other variables of interest. Description, on the other hand, focuses on finding patterns describing the data that can be interpreted by humans. Therefore, it is possible to put data‐mining activities into one of two categories:
- Predictive data mining, which produces the model of the system described by the given data set, or
- Descriptive data mining, which produces new, nontrivial information based on the available data set.
On the predictive end of the spectrum, the goal of data mining is to produce a model, expressed as an executable code, which can be used to perform classification, prediction, estimation, or other similar tasks. On the other, descriptive end of the spectrum, the goal is to gain an understanding of the analyzed system by uncovering patterns and relationships in large data sets. The relative importance of prediction and description for particular data‐mining applications can vary considerably. The goals of prediction and description are achieved by using data‐mining techniques, explained later in this book, for the following primary data‐mining tasks:
- Classification—Discovery of a predictive learning function that classifies a data item into one of several predefined classes.
- Regression—Discovery of a predictive learning function, which maps a data item to a real‐value prediction variable.
- Clustering—A common descriptive task in which one seeks to identify a finite set of categories or clusters to describe the data.
- Summarization—An additional descriptive task that involves methods for finding a compact description for a set (or subset) of data.
- Dependency modeling—Finding a local model that describes significant dependencies between variables or between the values of a feature in a data set or in a part of a data set.
- Change and deviation detection—Discovering the most significant changes in the data set.
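The predictive/descriptive distinction underlying the task list above can be illustrated with a minimal sketch. The toy data and helper functions below are assumptions for illustration, not examples from the book: a nearest-centroid rule performs a predictive task (classifying a new item into one of two predefined classes), and a simple k-means-style pass performs a descriptive task (discovering clusters in the same data without any labels).

```python
def centroid(points):
    """Mean point of a list of 2-D points."""
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def dist2(a, b):
    """Squared Euclidean distance between two 2-D points."""
    return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

# --- Predictive task: classify a new item into a predefined class ---
class_a = [(1.0, 1.0), (1.2, 0.8), (0.9, 1.1)]   # labeled training data
class_b = [(5.0, 5.0), (5.2, 4.8), (4.9, 5.1)]
centroids = {"A": centroid(class_a), "B": centroid(class_b)}

def classify(x):
    """Assign x to the class with the nearest centroid."""
    return min(centroids, key=lambda label: dist2(x, centroids[label]))

print(classify((1.1, 0.9)))   # prints A
print(classify((4.8, 5.2)))   # prints B

# --- Descriptive task: discover clusters with no predefined labels ---
data = class_a + class_b
seeds = [data[0], data[-1]]              # two initial cluster centers
for _ in range(5):                       # a few refinement iterations
    groups = [[], []]
    for p in data:
        nearest = 0 if dist2(p, seeds[0]) <= dist2(p, seeds[1]) else 1
        groups[nearest].append(p)
    seeds = [centroid(g) for g in groups]
print([len(g) for g in groups])          # sizes of the discovered clusters
```

Note the asymmetry: the predictive half needs labeled examples and produces an executable decision rule, while the descriptive half takes only the raw data and produces a human-interpretable summary (the cluster centers and memberships).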
A more formal treatment of data‐mining tasks for complex and large data sets, with graphical interpretation and illustrative examples, is given in Chapter 4. The introductory classifications and definitions here are intended only to give the reader a sense of the wide spectrum of problems and tasks that may be solved using data‐mining technology.
The success of a data‐mining engagement depends largely on the amount of energy, knowledge, and creativity that the designer puts into it. In essence, data mining is like solving a puzzle. The individual pieces of the puzzle are not complex structures in and of themselves. Taken as a collective whole, however, they can constitute very elaborate systems. As you try to unravel these systems, you will probably get frustrated, start forcing parts together, and generally become annoyed at the entire process; but once you know how to work with the pieces, you realize that it was not really that hard in the first place. The same analogy can be applied to data mining. In the beginning, the designers of the data‐mining process probably do not know much about the data sources; if they did, they would most likely not be interested in performing data mining. Individually, the data seem simple, complete, and explainable. But collectively, they take on a whole new appearance that is intimidating and difficult to comprehend, like the puzzle. Therefore, being an analyst and designer in a data‐mining process requires, besides thorough professional knowledge, creative thinking and a willingness to see problems in a different light.
Data mining is one of the fastest growing fields in the computer industry. Once a small interest area within computer science and statistics, it has quickly expanded into a field of its own. One of the greatest strengths of data mining is reflected in its wide range of methodologies and techniques that can be applied to a host of problem sets. Since data mining is a natural activity to be performed on large data sets, one of the largest target markets is the entire data‐warehousing, data‐mart, and decision‐support community, encompassing professionals from such industries as retail, manufacturing, telecommunications, healthcare, insurance, and transportation. In the business community, data mining can be used to discover new purchasing trends, plan investment strategies, and detect unauthorized expenditures in the accounting system. It can improve marketing campaigns, and the outcomes can be used to provide customers with more focused support and attention. Data‐mining techniques can be applied to problems of business process reengineering, in which the goal is to understand interactions and relationships among business practices and organizations.
Many law enforcement and special investigative units, whose mission is to identify fraudulent activities and discover crime trends, have also used data mining successfully. For example, these methodologies can aid analysts in the identification of critical behavior patterns, the communication interactions of narcotics organizations, the monetary transactions of money laundering and insider trading operations, the movements of serial killers, and the targeting of smugglers at border crossings. Data‐mining techniques have also been employed by people in the intelligence community who maintain many large data sources as a part of the activities relating to matters of national security. Appendix B of the book gives a brief overview of typical commercial applications of data‐mining technology today. Despite a considerable level of over‐hype and strategic misuse, data mining has not only persevered but also matured...
| Publication date | 23.10.2019 |
|---|---|
| Language | English |
| Subject area | Computer Science ► Databases ► Data Warehouse / Data Mining |
| Keywords | characteristics of raw data • Computer Engineering • Computer Science • data collection • Data Mining • Data Mining Algorithms • Data Mining & Knowledge Discovery • Data Mining Methods • data mining models • Data Mining Process • data reduction • Electrical & Electronics Engineering • Grid & Cloud Computing • Guide to data mining • preparing data • raw data • time-dependent data • understanding data mining |
| ISBN-10 | 1-119-51607-2 / 1119516072 |
| ISBN-13 | 978-1-119-51607-1 / 9781119516071 |
Copy protection: Adobe DRM
File format: EPUB (Electronic Publication)