Speculative Decoding and Efficient LLM Inference with Chris Lott - #717
Today, we're joined by Chris Lott, senior director of engineering at Qualcomm AI Research, to discuss accelerating large language model inference. We explore the challenges presented by the LLM encoding and decoding (aka generation) phases, and how these interact with hardware constraints such as FLOPS, memory footprint, and memory bandwidth to limit key inference metrics such as time-to-first-token, tokens per second, and tokens per joule. We then dig into a variety of techniques that can be used to accelerate inference, such as KV cache compression, quantization, pruning, speculative decoding, and leveraging small language models (SLMs). We also discuss future directions for enabling on-device agentic experiences, such as parallel generation and software tools like Qualcomm AI Orchestrator.
The complete show notes for this episode can be found at https://twimlai.com/go/717.
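For readers unfamiliar with speculative decoding, one of the acceleration techniques mentioned above, the rough idea is that a small, fast draft model proposes several tokens ahead, and the large target model then verifies them, accepting or resampling each one so that the final output distribution matches the target model. Below is a minimal illustrative sketch of that accept/reject loop; the toy `draft_dist`/`target_dist` functions and the tiny vocabulary are stand-ins invented for this example, not the implementation discussed in the episode.

```python
# Minimal speculative-decoding sketch. The "models" here are toy, deterministic
# stand-ins for a small draft LLM and a large target LLM.
import numpy as np

VOCAB = 16                      # toy vocabulary size (assumption for the sketch)
rng = np.random.default_rng(0)  # sampling randomness

def _toy_dist(context, salt):
    # Deterministic toy next-token distribution keyed on the context.
    g = np.random.default_rng(abs(hash((salt, tuple(context)))) % (2**32))
    logits = g.standard_normal(VOCAB)
    e = np.exp(logits - logits.max())
    return e / e.sum()

def draft_dist(context):
    """Stand-in for the small, fast draft model (an SLM in practice)."""
    return _toy_dist(context, "draft")

def target_dist(context):
    """Stand-in for the large, slow target model."""
    return _toy_dist(context, "target")

def speculative_step(context, k=4):
    """One round: draft k tokens cheaply, then verify them with the target model."""
    drafted, draft_probs, ctx = [], [], list(context)
    for _ in range(k):                      # 1) draft model proposes k tokens
        q = draft_dist(ctx)
        t = int(rng.choice(VOCAB, p=q))
        drafted.append(t)
        draft_probs.append(q)
        ctx.append(t)
    accepted = []
    for t, q in zip(drafted, draft_probs):  # 2) target verifies each proposal
        p = target_dist(list(context) + accepted)
        if rng.random() < min(1.0, p[t] / q[t]):
            accepted.append(t)              # accept the drafted token
        else:
            residual = np.maximum(p - q, 0.0)  # resample from max(0, p - q)
            accepted.append(int(rng.choice(VOCAB, p=residual / residual.sum())))
            break                           # discard the remaining draft tokens
    return accepted

context = [1, 2, 3]
for _ in range(5):
    context += speculative_step(context)
print("generated:", context)
```

In a real system the target model scores all k drafted positions in a single batched forward pass, which is where the latency win comes from, while the accept/resample rule keeps the output distribution identical to decoding from the target model alone.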
The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)
755 episodes
All episodes
LLMs for Equities Feature Forecasting at Two Sigma with Ben Wellington - #736 (59:31)
Zero-Shot Auto-Labeling: The End of Annotation for Computer Vision with Jason Corso - #735 (56:45)
Grokking, Generalization Collapse, and the Dynamics of Training Deep Neural Networks with Charles Martin - #734 (1:25:21)
Google I/O 2025 Special Edition - #733 (26:21)
RAG Risks: Why Retrieval-Augmented LLMs are Not Safer with Sebastian Gehrmann - #732 (57:09)
From Prompts to Policies: How RL Builds Better AI Agents with Mahesh Sathiamoorthy - #731 (1:01:25)
How OpenAI Builds AI Agents That Think and Act with Josh Tobin - #730 (1:07:27)
CTIBench: Evaluating LLMs in Cyber Threat Intelligence with Nidhi Rastogi - #729 (56:18)
Generative Benchmarking with Kelly Hong - #728 (54:17)
Exploring the Biology of LLMs with Circuit Tracing with Emmanuel Ameisen - #727 (1:34:06)
Teaching LLMs to Self-Reflect with Reinforcement Learning with Maohao Shen - #726 (51:45)
Waymo's Foundation Model for Autonomous Driving with Drago Anguelov - #725 (1:09:07)
Dynamic Token Merging for Efficient Byte-level Language Models with Julie Kallini - #724 (50:32)
Scaling Up Test-Time Compute with Latent Reasoning with Jonas Geiping - #723 (58:38)
Imagine while Reasoning in Space: Multimodal Visualization-of-Thought with Chengzu Li - #722 (42:11)