79,75 €*
Free shipping via Post / DHL
Delivery time: 1-2 weeks
Progress in Machine Learning is increasing the use of artificial agents to perform critical tasks previously handled by humans (in healthcare, legal, and finance, among other fields). While the principles that guide the design of these agents are understood, most current deep-learning models are "opaque" to human understanding. Explainable AI with Python fills the current gap in the literature on this emerging topic by taking both a theoretical and a practical perspective, quickly enabling the reader to work with tools and code for Explainable AI.
Beginning with examples of what Explainable AI (XAI) is and why it is needed in the field, the book details different approaches to XAI depending on the specific context and need. Hands-on work on interpretable models, with specific examples in Python, is then presented, showing how intrinsically interpretable models can be interpreted and how to produce "human-understandable" explanations. Model-agnostic methods for XAI are shown to produce explanations without relying on the internals of "opaque" ML models. Using examples from Computer Vision, the authors then look at explainable models for Deep Learning and at prospective methods for the future. Taking a practical perspective, the authors demonstrate how to effectively use ML and XAI in science. The final chapter explains Adversarial Machine Learning and how to do XAI with adversarial examples.
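As a minimal illustration of the model-agnostic idea mentioned above (a sketch, not code from the book itself): permutation importance explains a fitted model by shuffling one feature at a time and measuring the drop in score, treating the model purely as a black box. The dataset and estimator below are illustrative assumptions using scikit-learn.

```python
# Minimal sketch of a model-agnostic explanation: permutation importance.
# Dataset and model are illustrative; any fitted estimator with a score works.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature on held-out data and record the drop in accuracy;
# the model is only queried through its scoring interface, never inspected.
result = permutation_importance(model, X_val, y_val, n_repeats=10, random_state=0)
for name, importance in sorted(zip(X.columns, result.importances_mean),
                               key=lambda p: -p[1])[:5]:
    print(f"{name}: {importance:.3f}")
```

The same procedure applies unchanged to any other classifier or regressor, which is what makes the approach model-agnostic.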
Leonida Gianfagna (PhD, MBA) is a theoretical physicist who currently works in cyber security as R&D Director for Cyber Guru. Before joining Cyber Guru he worked at IBM for 15 years, holding leading roles in software development for ITSM (IT Service Management). He is the author of several publications in theoretical physics and computer science and is accredited as an IBM Master Inventor (15+ filings).
Antonio Di Cecco is a theoretical physicist with a strong mathematical background who is fully engaged in delivering education on AIML at all levels, from beginners to experts, both in face-to-face classes and remotely. The main strength of his approach is deep-diving into the mathematical foundations of AIML models, which opens new angles for presenting AIML knowledge and reveals room for improvement over the existing state of the art. Antonio also holds a Master in Economics with a focus on innovation, and has teaching experience. He leads the School of AI in Italy, with chapters in Rome and Pescara.
Offers a high-level perspective that explains the basics of XAI and its impact on business and society, as well as a useful guide for machine learning practitioners to the current techniques for achieving explainability in AIML systems
Provides the basic knowledge from both a theoretical and a practical perspective (with examples and direct implementations), quickly enabling the reader to work with tools and code for Explainable AI
Explains methods for intrinsically interpretable ML models and model-agnostic methods for non-interpretable ones
Year of publication: | 2021 |
Genre: | Computer Science, Mathematics, Medicine, Natural Sciences, Technology |
Category: | Natural Sciences & Technology |
Medium: | Paperback |
Contents: | viii, 202 pp., 119 illustrations (103 in color) |
ISBN-13: | 9783030686390 |
ISBN-10: | 3030686396 |
Language: | English |
Format / Extras: | Paperback |
Binding: | Softcover |
Author: | Di Cecco, Antonio; Gianfagna, Leonida |
Edition: | 1st ed. 2021 |
Publisher: | Springer Nature Switzerland; Springer International Publishing AG |
Responsible person for the EU: | Books on Demand GmbH, In de Tarpen 42, D-22848 Norderstedt, info@bod.de |
Dimensions: | 235 x 155 x 12 mm |
By: | Antonio Di Cecco (et al.) |
Publication date: | 29.04.2021 |
Weight: | 0.33 kg |