Join us for this immersive, hands-on session that shows how to harness the power of retrieval-augmented generation (RAG) to enhance local large language models (LLMs). Developers can follow the examples and code via Intel® Tiber™ AI Cloud. This interactive session builds the skills and knowledge needed to design and implement RAG-based AI systems using local LLMs, eliminating the need for cloud-based services and ensuring data privacy and security.

Key takeaways

Topics covered in this workshop include:
- Understand the fundamentals of local LLMs and RAG-based AI
- Deploy local LLMs on your own hardware using popular frameworks and tools, including Hugging Face Transformers and PyTorch, to maximize security and data privacy (see the first sketch after this list)
- Integrate local LLMs with RAG-based AI systems through hands-on demonstrations: retrieve relevant information, augment it with context, and generate human-like text (see the RAG sketch after this list)
- Explore real-world use cases and case studies of local LLM-powered, RAG-based AI applications
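To make the deployment step concrete, here is a minimal sketch of loading and running an LLM entirely on local hardware with Hugging Face Transformers and PyTorch. The model name and generation settings are illustrative assumptions, not part of the workshop materials.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical model choice; substitute any causal LM you have downloaded locally.
model_name = "microsoft/Phi-3-mini-4k-instruct"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,  # half precision to fit modest local hardware
    device_map="auto",          # needs the accelerate package; places layers on available devices
)

def generate(prompt: str, max_new_tokens: int = 256) -> str:
    """Generate text entirely on local hardware; no data leaves the machine."""
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    with torch.no_grad():
        output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Strip the prompt tokens and decode only the newly generated continuation.
    new_tokens = output_ids[0][inputs["input_ids"].shape[-1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)

print(generate("Explain retrieval-augmented generation in one sentence."))
```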
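And here is a minimal sketch of the retrieve-augment-generate loop the session walks through, reusing generate() from the sketch above. The sentence-transformers embedder and the tiny in-memory corpus are assumptions chosen for brevity; the workshop's actual retrieval stack may differ.

```python
# Minimal RAG loop: retrieve relevant passages, augment the prompt, generate locally.
from sentence_transformers import SentenceTransformer, util

documents = [
    "Retrieval-augmented generation grounds LLM answers in external documents.",
    "Local LLMs keep prompts and retrieved data on your own hardware.",
]  # stand-in corpus for illustration

embedder = SentenceTransformer("all-MiniLM-L6-v2")
doc_embeddings = embedder.encode(documents, convert_to_tensor=True)

def rag_answer(question: str, top_k: int = 2) -> str:
    # Retrieve: rank the corpus by semantic similarity to the question.
    query_embedding = embedder.encode(question, convert_to_tensor=True)
    hits = util.semantic_search(query_embedding, doc_embeddings, top_k=top_k)[0]
    context = "\n".join(documents[hit["corpus_id"]] for hit in hits)

    # Augment: prepend the retrieved context to the user question.
    prompt = (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    )

    # Generate: call the locally hosted model (generate() from the previous sketch).
    return generate(prompt)

print(rag_answer("Why run RAG with a local LLM?"))
```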
Hands-on demonstrations of coding techniques require an Intel Tiber AI Cloud account. If you don’t have one, get one here. The workshop targets intermediate to advanced developers. The session gives developers practical knowledge of how to combine retrieved information with context through RAG-based AI to enrich the user experience.