Neural Networks and Deep Learning
A Textbook
Book by Charu C. Aggarwal
Language: English

64,85 €*

incl. VAT

Free shipping via Post / DHL

In stock, delivery time 4-7 working days

Description
This book covers both classical and modern models in deep learning. The primary focus is on the theory and algorithms of deep learning: these fundamentals are what allow one to understand the design of neural architectures in different applications. Why do neural networks work? When do they work better than off-the-shelf machine-learning models? When is depth useful? Why is training neural networks so hard? What are the pitfalls? The book also discusses a rich set of applications in order to give the practitioner a flavor of how neural architectures are designed for different types of problems. Deep learning methods for various data domains, such as text, images, and graphs, are presented in detail. The chapters of this book span three categories:

The basics of neural networks: The backpropagation algorithm is discussed in Chapter 2.
Many traditional machine learning models can be understood as special cases of neural networks. Chapter 3 explores the connections between traditional machine learning and neural networks. Support vector machines, linear/logistic regression, singular value decomposition, matrix factorization, and recommender systems are shown to be special cases of neural networks.
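As a small illustration of the kind of equivalence Chapter 3 develops (a sketch, not taken from the book itself): logistic regression is exactly a single sigmoid neuron trained by gradient descent on the log-loss. The toy data and learning rate below are chosen for illustration only.

```python
# Logistic regression viewed as a one-neuron "network":
# a single unit with a sigmoid activation, trained by gradient descent.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))                  # toy 2-D features
y = (X[:, 0] + X[:, 1] > 0).astype(float)      # linearly separable labels

w = np.zeros(2)                                # the neuron's weights
b = 0.0                                        # and bias
lr = 0.5
for _ in range(500):
    z = X @ w + b
    p = 1.0 / (1.0 + np.exp(-z))               # sigmoid activation
    grad_w = X.T @ (p - y) / len(y)            # log-loss gradient w.r.t. weights
    grad_b = np.mean(p - y)                    # ... and w.r.t. bias
    w -= lr * grad_w
    b -= lr * grad_b

acc = np.mean((p > 0.5) == y)                  # training accuracy
```

The update rule is identical whether one derives it as logistic regression or as backpropagation through a one-layer network, which is precisely the point of the correspondence.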

Fundamentals of neural networks: A detailed discussion of training and regularization is provided in Chapters 4 and 5. Chapters 6 and 7 present radial-basis function (RBF) networks and restricted Boltzmann machines.

Advanced topics in neural networks: Chapters 8, 9, and 10 discuss recurrent neural networks, convolutional neural networks, and graph neural networks. Several advanced topics like deep reinforcement learning, attention mechanisms, transformer networks, Kohonen self-organizing maps, and generative adversarial networks are introduced in Chapters 11 and 12.

The textbook is written for graduate students and upper-level undergraduate students. Researchers and practitioners working in this field will want to purchase it as well.
Where possible, an application-centric view is highlighted in order to provide an understanding of the practical uses of each class of [...]. The second edition is substantially reorganized and expanded, with separate chapters on backpropagation and graph neural networks. Many chapters have been significantly revised relative to the first edition.

Greater focus is placed on modern deep learning ideas such as attention mechanisms, transformers, and pre-trained language models.
About the Author
Charu C. Aggarwal is a Distinguished Research Staff Member (DRSM) at the IBM T. J. Watson Research Center in Yorktown Heights, New York. He completed his undergraduate degree in Computer Science from the Indian Institute of Technology at Kanpur in 1993 and his Ph.D. from the Massachusetts Institute of Technology in 1996. He has worked extensively in the field of data mining. He has published more than 400 papers in refereed conferences and journals and authored over 80 patents. He is the author or editor of 20 books, including textbooks on data mining, recommender systems, and outlier analysis. Because of the commercial value of his patents, he has thrice been designated a Master Inventor at IBM. He is a recipient of an IBM Corporate Award (2003) for his work on bio-terrorist threat detection in data streams, a recipient of the IBM Outstanding Innovation Award (2008) for his scientific contributions to privacy technology, and a recipient of two IBM Outstanding Technical Achievement Awards (2009, 2015) for his work on data streams/high-dimensional data. He received the EDBT 2014 Test of Time Award for his work on condensation-based privacy-preserving data mining. He is a recipient of the IEEE ICDM Research Contributions Award (2015) and the ACM SIGKDD Innovation Award, which are the two most prestigious awards for influential research contributions in the field of data mining. He is also a recipient of the W. Wallace McDowell Award, which is the highest award given solely by the IEEE Computer Society across the field of Computer Science.
He has served as the general co-chair of the IEEE Big Data Conference (2014) and as the program co-chair of the ACM CIKM Conference (2015), the IEEE ICDM Conference (2015), and the ACM KDD Conference (2016). He served as an associate editor of the IEEE Transactions on Knowledge and Data Engineering from 2004 to 2008. He is an associate editor of the IEEE Transactions on Big Data, an action editor of the Data Mining and Knowledge Discovery Journal, and an associate editor of the Knowledge and Information Systems Journal. He has served or currently serves as the editor-in-chief of the ACM Transactions on Knowledge Discovery from Data as well as the ACM SIGKDD Explorations. He is also an editor-in-chief of ACM Books. He serves on the advisory board of the Lecture Notes on Social Networks, a publication by Springer. He has served as the vice-president of the SIAM Activity Group on Data Mining and is a member of the SIAM industry committee. He is a fellow of the SIAM, ACM, and the IEEE, for "contributions to knowledge discovery and data mining algorithms."
Summary

Simple and intuitive discussions of neural networks and deep learning

Provides mathematical details without losing the reader in complexity

Includes exercises and examples

Discusses both traditional neural networks and recent deep learning models

Table of Contents

An Introduction to Neural Networks.- The Backpropagation Algorithm.- Machine Learning with Shallow Neural Networks.- Deep Learning: Principles and Training Algorithms.- Teaching a Deep Neural Network to Generalize.- Radial Basis Function Networks.- Restricted Boltzmann Machines.- Recurrent Neural Networks.- Convolutional Neural Networks.- Graph Neural Networks.- Deep Reinforcement Learning.- Advanced Topics in Deep Learning.

Details
Publication year: 2023
Genre: Computer Science, Mathematics, Medicine, Natural Sciences, Technology
Category: Natural Sciences & Technology
Medium: Book
Contents: xxiv, 529 pages, 128 b/w illustrations, 22 color illustrations
ISBN-13: 9783031296413
ISBN-10: 3031296419
Language: English
Format: Hardcover, rounded spine, laminated
Binding: Hardback
Author: Aggarwal, Charu C.
Edition: Second Edition 2023
Publisher: Springer International Publishing (Springer International Publishing AG)
Responsible person for the EU: Springer Verlag GmbH, Tiergartenstr. 17, D-69121 Heidelberg, juergen.hartmann@springer.com
Dimensions: 260 x 183 x 36 mm
By: Charu C. Aggarwal
Publication date: 30.06.2023
Weight: 1.23 kg
Article ID: 126667572