Ebook details

Transformers for Natural Language Processing and Computer Vision. Explore Generative AI and Large Language Models with Hugging Face, ChatGPT, GPT-4V, and DALL-E 3 - Third Edition

Denis Rothman

Ebook
Transformers for Natural Language Processing and Computer Vision, Third Edition, explores Large Language Model (LLM) architectures, applications, and various platforms (Hugging Face, OpenAI, and Google Vertex AI) used for Natural Language Processing (NLP) and Computer Vision (CV).

The book guides you through different transformer architectures to the latest Foundation Models and Generative AI. You’ll pretrain and fine-tune LLMs and work through different use cases, from summarization to implementing question-answering systems with embedding-based search techniques. You will also learn about the risks of LLMs, from hallucinations and memorization to privacy, and how to mitigate such risks using moderation models with rule and knowledge bases. You’ll implement Retrieval-Augmented Generation (RAG) with LLMs to improve the accuracy of your models and gain greater control over LLM outputs.

Dive into generative vision transformers and multimodal model architectures and build applications such as image- and video-to-text classifiers. Go further by combining different models and platforms and learning about AI agent replication.

This book provides you with an understanding of transformer architectures, pretraining, fine-tuning, LLM use cases, and best practices.
  • 1. What are Transformers?
  • 2. Getting Started with the Architecture of the Transformer Model
  • 3. Emergent vs Downstream Tasks: The Unseen Depths of Transformers
  • 4. Advancements in Translations with Google Trax, Google Translate, and Gemini
  • 5. Diving into Fine-Tuning through BERT
  • 6. Pretraining a Transformer from Scratch through RoBERTa
  • 7. The Generative AI Revolution with ChatGPT
  • 8. Fine-Tuning OpenAI GPT Models
  • 9. Shattering the Black Box with Interpretable Tools
  • 10. Investigating the Role of Tokenizers in Shaping Transformer Models
  • 11. Leveraging LLM Embeddings as an Alternative to Fine-Tuning
  • 12. Toward Syntax-Free Semantic Role Labeling with ChatGPT and GPT-4
  • 13. Summarization with T5 and ChatGPT
  • 14. Exploring Cutting-Edge LLMs with Vertex AI and PaLM 2
  • 15. Guarding the Giants: Mitigating Risks in Large Language Models
  • 16. Beyond Text: Vision Transformers in the Dawn of Revolutionary AI
  • 17. Transcending the Image-Text Boundary with Stable Diffusion
  • 18. Hugging Face AutoTrain: Training Vision Models without Coding
  • 19. On the Road to Functional AGI with HuggingGPT and its Peers
  • 20. Beyond Human-Designed Prompts with Generative Ideation
  • Title: Transformers for Natural Language Processing and Computer Vision. Explore Generative AI and Large Language Models with Hugging Face, ChatGPT, GPT-4V, and DALL-E 3 - Third Edition
  • Author: Denis Rothman
  • Original title: Transformers for Natural Language Processing and Computer Vision. Explore Generative AI and Large Language Models with Hugging Face, ChatGPT, GPT-4V, and DALL-E 3 - Third Edition
  • ISBN: 9781805123743
  • Publication date: 2024-02-29
  • Format: Ebook
  • Item identifier: e_3ua7
  • Publisher: Packt Publishing