Ebook details
AI-Native LLM Security. Threats, defenses, and best practices for building safe and trustworthy AI
Vaibhav Malik, Ken Huang, Ads Dawson
Adversarial AI attacks present a unique set of security challenges, exploiting the very foundation of how AI learns. This book explores these threats in depth, equipping cybersecurity professionals with the tools needed to secure generative AI and LLM applications. Rather than skimming the surface of emerging risks, it focuses on practical strategies, industry standards, and recent research to build a robust defense framework.
Structured around actionable insights, the chapters introduce a secure-by-design methodology, integrating threat modeling and MLSecOps practices to fortify AI systems. You’ll discover how to leverage established taxonomies from OWASP, NIST, and MITRE to identify and mitigate vulnerabilities. Through real-world examples, the book highlights best practices for incorporating security controls into AI development life cycles, covering key areas such as CI/CD, MLOps, and open-access LLMs.
Built on the expertise of its co-authors—pioneers in the OWASP Top 10 for LLM applications—this guide also addresses the ethical implications of AI security, contributing to the broader conversation on trustworthy AI. By the end of this book, you’ll be able to develop, deploy, and secure AI technologies with confidence and clarity.
*Email sign-up and proof of purchase required
- 1. Fundamentals and Introduction to Large Language Models
- 2. Securing Large Language Models
- 3. The Dual Nature of LLM Risks: Inherent Vulnerabilities and Malicious Actors
- 4. Mapping Trust Boundaries in LLM Architectures
- 5. Aligning LLM Security with Organizational Objectives and Regulatory Landscapes
- 6. Identifying and Prioritizing LLM Security Risks with OWASP
- 7. Diving Deep: Profiles of the Top 10 LLM Security Risks
- 8. Mitigating LLM Risks: Strategies and Techniques for Each OWASP Category
- 9. Adapting the OWASP Top 10 to Diverse Deployment Scenarios
- 10. Designing LLM Systems for Security: Architecture, Controls, and Best Practices
- 11. Integrating Security into the LLM Development Life Cycle: From Data Curation to Deployment
- 12. Operational Resilience: Monitoring, Incident Response, and Continuous Improvement
- 13. The Future of LLM Security: Emerging Threats, Promising Defenses, and the Path Forward
- 14. Appendix A
- 15. Appendix B
- Title: AI-Native LLM Security. Threats, defenses, and best practices for building safe and trustworthy AI
- Author: Vaibhav Malik, Ken Huang, Ads Dawson
- Original title: AI-Native LLM Security. Threats, defenses, and best practices for building safe and trustworthy AI
- ISBN: 9781836203742
- Date of issue: 2025-12-12
- Format: Ebook
- Item ID: e_4glc
- Publisher: Packt Publishing