[Available On-Demand on Thursday, September 12] These back-to-back workshops, suitable for all skill levels, present two approaches to accelerating performance on AI PCs. The morning session delves into techniques for effectively combining Neural Processing Units (NPUs) with CPUs and GPUs to build GenAI applications. The late morning session explains how to use ONNX effectively with OpenVINO as a backend, achieving performance gains with the ONNX Runtime APIs and the OpenVINO™ Execution Provider on the AI PC.
Topics covered in the morning session include:
- Understand large language models, the advantages of local inference, and the challenges encountered.
- See how AI workloads can be accelerated on Intel® Core™ Ultra processors by leveraging NPU technology.
- Discover techniques for quick prototyping of LLMs using the Intel Core Ultra processor with the Intel® NPU Acceleration Library.
- See how to deploy models to the NPU with the OpenVINO™ toolkit and its NPU plugin.
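As a rough illustration of the OpenVINO deployment topic above, the sketch below compiles a model for the NPU when one is present, falling back to GPU or CPU otherwise. This is a hypothetical example: the model path "model.xml" is a placeholder, the device-preference order is an assumption, and it requires the openvino Python package.

```python
def pick_device(available, preferred=("NPU", "GPU", "CPU")):
    """Return the first preferred device that OpenVINO reports as available."""
    for device in preferred:
        if device in available:
            return device
    raise RuntimeError(f"No supported device found in {available}")


def compile_for_best_device(model_path="model.xml"):
    """Compile an OpenVINO IR model for the best available device (NPU first)."""
    import openvino as ov  # requires the openvino package on an AI PC

    core = ov.Core()
    device = pick_device(core.available_devices)  # e.g. "NPU" on Core Ultra
    return core.compile_model(model_path, device_name=device)
```

On a Core Ultra system with the NPU driver installed, `core.available_devices` typically lists "NPU" alongside "CPU" and "GPU", so the same script runs unchanged on machines without an NPU.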
Topics covered in the late morning session include:
- Learn the characteristics of an AI PC and the benefits these systems offer developers.
- Understand the techniques for inferencing and deploying ONNX models on an AI PC.
- Evaluate the performance of ONNX models on AI PC systems with a combination of OpenVINO, ONNX, and the OpenVINO™ Execution Provider for ONNX Runtime.
- Learn how to build a standalone app for an AI PC with the OpenVINO™ Execution Provider for ONNX Runtime.
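As a sketch of how the ONNX inferencing workflow above might look in code (hypothetical: the model path and input are placeholders, and the "device_type" option value follows recent onnxruntime-openvino releases), the idea is to hand ONNX Runtime a provider list that prefers the OpenVINO Execution Provider and falls back to plain CPU execution:

```python
def openvino_ep_providers(device_type="NPU"):
    """Execution-provider list preferring the OpenVINO EP, with CPU fallback."""
    return [
        ("OpenVINOExecutionProvider", {"device_type": device_type}),
        "CPUExecutionProvider",
    ]


def run_onnx_model(model_path, inputs, device_type="NPU"):
    """Run an ONNX model through ONNX Runtime on the chosen OpenVINO device."""
    import onnxruntime as ort  # requires the onnxruntime-openvino package

    session = ort.InferenceSession(
        model_path, providers=openvino_ep_providers(device_type)
    )
    return session.run(None, inputs)  # inputs: dict of input name -> numpy array
```

Because ONNX Runtime walks the provider list in order, the same application code runs on systems without the OpenVINO package's NPU support, which is what makes a standalone AI PC app portable.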
Together, these workshops round out your AI skills and show you optimal techniques for boosting performance using the unique capabilities of AI PCs. Join us for both sessions and enrich your understanding of these key concepts. Skill level: All