Learning rate schedulers
Content provided by PyTorch, Edward Yang, and Team PyTorch.
What’s a learning rate? Why might you want to schedule it? How does the LR scheduler API in PyTorch work? What the heck is up with the formula implementation? Why is everything terrible?
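For readers who want to see the API being discussed, here is a minimal sketch of wiring up a scheduler; the model, optimizer settings, and StepLR parameters are arbitrary choices for illustration.

```python
import torch

model = torch.nn.Linear(10, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
# Multiply the learning rate by 0.5 every 10 epochs.
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.5)

for epoch in range(30):
    # ... per-batch training: forward, loss.backward(), optimizer.step() ...
    optimizer.step()                       # placeholder step so the ordering is valid
    scheduler.step()                       # advance the schedule once per epoch
    print(epoch, scheduler.get_last_lr())  # the LR the optimizer will use next
```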
83 episodes
All episodes of the PyTorch Developer Podcast

Compiler collectives are a PT2 feature whereby compiler instances across multiple ranks use NCCL collectives to communicate information to other instances. This is used to ensure we consistently decide whether inputs are static or dynamic across all ranks. See also PR at https://github.com/pytorch/pytorch/pull/130935…

TORCH_TRACE and tlparse are a structured log and log parser for PyTorch 2. It gives useful information about what code was compiled and what the intermediate build products look like.

Higher order operators are a special form of operators in torch.ops which have relaxed input argument requirements: in particular, they can accept any form of argument, including Python callables. Their name is based off of their most common use case, which is to represent higher order functions like control flow operators. However, they are also used to implement other variants of basic operators and can also be used to smuggle in Python data that is quite unusual. They are implemented using a Python dispatcher.…
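As a hedged illustration of the "accepts Python callables" point: torch.cond is one such higher order operator, taking two callables plus a tuple of operands. The functions and shapes below are made up for the example, and availability depends on your PyTorch 2.x version.

```python
import torch

def true_fn(x):
    return x.sin()

def false_fn(x):
    return x.cos()

@torch.compile
def f(x):
    # torch.cond is a higher order operator: its arguments include Python
    # callables, which ordinary torch.ops operators cannot accept.
    return torch.cond(x.sum() > 0, true_fn, false_fn, (x,))

print(f(torch.randn(4)))
```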

The post-grad FX passes in Inductor run after AOTAutograd has functionalized and normalized the input program into separate forward/backward graphs. As such, they generally can assume that the graph in question is functionalized, except for some mutations to inputs at the end of the graph. At the end of post-grad passes, there are special passes that reintroduce mutation into the graph before going into the rest of Inductor lowering which is generally aware of passes. The post-grad FX passes are varied but are typically domain specific passes making local changes to specific parts of the graph.…

CUDA graph trees are the internal implementation of CUDA graphs used in PT2 when you say mode="reduce-overhead". Their primary innovation is that they allow the reuse of memory across multiple CUDA graphs, as long as they form a tree structure of potential paths you can go down with the CUDA graph. This greatly reduced the memory usage of CUDA graphs in PT2. There are some operational implications to using CUDA graphs which are described in the podcast.…
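For reference, this is the user-facing switch being described; the toy function below is only an illustration, and the CUDA graph machinery kicks in only when running on a GPU.

```python
import torch

def step(x):
    # A small, fixed-shape computation dominated by kernel launch overhead.
    return torch.nn.functional.relu(x) * 2 + 1

compiled = torch.compile(step, mode="reduce-overhead")  # enables CUDA graph trees

if torch.cuda.is_available():
    x = torch.randn(1024, device="cuda")
    for _ in range(3):       # early calls warm up and record; later calls replay
        out = compiled(x)
    print(out.sum().item())
```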

The min-cut partitioner makes decisions about what to save for backwards when splitting the forward and backwards graph from the joint graph traced by AOTAutograd. Crucially, it doesn't actually do a "split"; instead, it is deciding how much of the joint graph should be used for backwards. I also talk about the backward retracing problem.…

AOTInductor is a feature in PyTorch that lets you export an inference model into a self-contained dynamic library, which can subsequently be loaded and used to run optimized inference. It is aimed primarily at CUDA and CPU inference applications, for situations where your model only needs to be exported once while your runtime may still get continuous updates. One of the big underlying organizing principles is a limited ABI which does not include libtorch, which allows these libraries to stay stable over updates to the runtime. There are many export-like use cases you might be interested in using AOTInductor for, and some of the pieces should be useful, but AOTInductor does not necessarily solve them.…

Tensor subclasses allow you to extend PyTorch with new types of tensors without having to write any C++. They have been used to implement DTensor, FP8, Nested Jagged Tensor and Complex Tensor. Recent work by Brian Hirsh means that we can compile tensor subclasses in PT2, eliminating their overhead. The basic mechanism by which this compilation works is a desugaring process in AOTAutograd. There are some complications involving views, dynamic shapes and tangent metadata mismatch.…

Compiled autograd is an extension to PT2 that permits compiling the entirety of a backward() call in PyTorch. This allows us to fuse accumulate grad nodes as well as trace through arbitrarily complicated Python backward hooks. Compiled autograd is an important part of our plans for compiled DDP/FSDP as well as for whole-graph compilation.…

We discuss some extension points for customizing PT2 behavior across Dynamo, AOTAutograd and Inductor.

Define-by-run IR is how Inductor defines the internal compute of a pointwise/reduction operation. It is characterized by a function that calls a number of functions in the 'ops' namespace, where these ops can be overridden by different handlers depending on what kind of semantic analysis you need to do. The ops Inductor supports include regular arithmetic operators, but also memory load/store, indirect indexing, masking and collective operations like reductions.…

Traditionally, unsigned integer support in PyTorch was not great; we only supported uint8. Recently, we added support for uint16, uint32 and uint64. Bare bones functionality works, but I'm entreating the community to help us build out the rest. In particular, for most operations beyond the basics, we plan to rely on PT2 to generate the kernels. But if you have an eager kernel you really need, send us a PR and we'll put it in. While most of the implementation was straightforward, there are some weirdnesses related to type promotion inconsistencies with numpy and dealing with the upper range of uint64. There is also upcoming support for sub-byte dtypes uint1-7, and these will exclusively be implemented via PT2.…
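A hedged sketch of what "bare bones" support looks like in practice; exact operator coverage depends on your PyTorch version, so the example sticks to construction and casting.

```python
import torch

# Construction and dtype inspection work for the new unsigned dtypes.
x = torch.tensor([1, 2, 70000], dtype=torch.uint32)
print(x, x.dtype)

# For heavier arithmetic, casting to a well-supported dtype (or going through
# PT2) is the safer path while eager coverage is still being built out.
print(x.to(torch.int64) * 2)
```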

Inductor IR is an intermediate representation that lives between ATen FX graphs and the final Triton code generated by Inductor. It was designed to faithfully represent PyTorch semantics and accordingly models views, mutation and striding. When you write a lowering from ATen operators to Inductor IR, you get a TensorBox for each Tensor argument which contains a reference to the underlying IR (via StorageBox, and then a Buffer/ComputedBuffer) that says how the Tensor was computed. The inner computation is represented via define-by-run, which allows for a compact definition of the IR, while still allowing you to extract an FX graph out if you desire. Scheduling then takes buffers of Inductor IR and decides what can be fused. Inductor IR may have too many nodes; this would be a good thing to refactor in the future.…

I talk about VariableTracker in Dynamo. VariableTracker is Dynamo's representation of Python values. I talk about some recent changes, namely eager guards and mutable VT. I also tell you how to find the functionality you care about in VariableTracker ( https://docs.google.com/document/d/1XDPNK3iNNShg07jRXDOrMk2V_i66u1hEbPltcsxE-3E/edit#heading=h.i6v7gqw5byv6 ).…

This podcast goes over the basics of unbacked SymInts. You might want to listen to this one before listening to https://pytorch-dev-podcast.simplecast.com/episodes/zero-one-specialization Some questions we answer (h/t from Gregory Chanan): - Are unbacked symints only for export? Because otherwise I could just break / wait for the actual size. But maybe I can save some retracing / graph breaks perf if I have them too? So the correct statement is "primarily" for export? - Why am I looking into the broadcasting code at all? Naively, I would expect the export graph to be just a list of ATen ops strung together. Why do I recurse that far down? Why can't I annotate DONT_TRACE_ME_BRO? - How does 0/1 specialization fit into this? I understand we may want to 0/1 specialize in a dynamic shape regime in "eager" mode (is there a better term?), but that doesn't seem to matter for export? - So far we've mainly been talking about how to handle our own library code. There is a worry about pushing complicated constraints downstream, similar to torchscript. What constraints does this actually push?…

DataLoader with multiple workers leaks memory (16:38)
Today I'm going to talk about a famous issue in PyTorch, DataLoader with num_workers > 0 causes memory leak ( https://github.com/pytorch/pytorch/issues/13246 ). This bug is a good opportunity to talk about DataSet/DataLoader design in PyTorch, fork and copy-on-write memory in Linux and Python reference counting; you have to know about all of these things to understand why this bug occurs, but once you do, it also explains why the workarounds help. Further reading. A nice summary of the full issue https://github.com/pytorch/pytorch/issues/13246#issuecomment-905703662 DataLoader architecture RFC https://github.com/pytorch/pytorch/issues/49440 Cinder Python https://github.com/facebookincubator/cinder…
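A sketch of the usual workaround discussed in that issue: keep the dataset's storage in a form whose elements are not individual Python objects (for example one numpy array), so reads in forked workers do not dirty copy-on-write pages via refcount bumps. The class names and sizes here are illustrative.

```python
import numpy as np
from torch.utils.data import Dataset, DataLoader

class ListDataset(Dataset):
    """Backed by a Python list: every __getitem__ bumps object refcounts,
    writing to shared pages and defeating fork()'s copy-on-write."""
    def __init__(self, n):
        self.data = list(range(n))

    def __len__(self):
        return len(self.data)

    def __getitem__(self, i):
        return self.data[i]

class ArrayDataset(Dataset):
    """Backed by one numpy array: reading elements does not touch Python
    object headers, so worker processes keep sharing the parent's pages."""
    def __init__(self, n):
        self.data = np.arange(n)

    def __len__(self):
        return len(self.data)

    def __getitem__(self, i):
        return int(self.data[i])

loader = DataLoader(ArrayDataset(1_000_000), batch_size=256, num_workers=4)
```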

PyTorch operates on its input data in a batched manner, typically processing multiple batches of an input at once (rather than one at a time, as would be the case in typical programming). In this podcast, we talk a little about the implications of batching operations in this way, and then also about how PyTorch's API is structured for batching (hint: poorly) and how Numpy introduced a concept of ufunc/gufuncs to standardize over broadcasting and batching behavior. There is some overlap between this podcast and previous podcasts about TensorIterator and vmap; you may also be interested in those episodes. Further reading. ufuncs and gufuncs https://numpy.org/doc/stable/reference/ufuncs.html and https://numpy.org/doc/stable/reference/c-api/generalized-ufuncs.html A brief taxonomy of PyTorch operators by shape behavior http://blog.ezyang.com/2020/05/a-brief-taxonomy-of-pytorch-operators-by-shape-behavior/ Related episodes on TensorIterator and vmap https://pytorch-dev-podcast.simplecast.com/episodes/tensoriterator and https://pytorch-dev-podcast.simplecast.com/episodes/vmap…
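A small illustration of the batched/broadcast semantics being described (shapes chosen arbitrarily):

```python
import torch

# Elementwise ops broadcast: (3, 1) combined with (4,) yields (3, 4),
# rather than the op being applied one element at a time.
a = torch.arange(3.0).reshape(3, 1)
b = torch.arange(4.0)
print((a + b).shape)      # torch.Size([3, 4])

# Matmul batches over leading dimensions, gufunc-style:
x = torch.randn(8, 3, 4)
y = torch.randn(8, 4, 5)
print((x @ y).shape)      # torch.Size([8, 3, 5])
```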

Multiple dispatch in __torch_function__ (14:20)
Python is a single dispatch OO language, but there are some operations such as binary magic methods which implement a simple form of multiple dispatch. __torch_function__ (through its Numpy predecessor __array_function__) generalizes this mechanism so that invocations of torch.add with different subclasses work properly. This podcast describes how this mechanism works and how it can be used (in an unconventional way) to build composable subclasses ala JAX in functorch. Further reading: This podcast in written form https://dev-discuss.pytorch.org/t/functorch-levels-as-dynamically-allocated-classes/294 Multiple dispatch resolution rules in the RFC https://github.com/pytorch/rfcs/blob/master/RFC-0001-torch-function-for-methods.md#process-followed-during-a-functionmethod-call…
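A minimal sketch of the hook being described; the subclass is made up, and real composable subclasses (as in functorch) do considerably more inside the override.

```python
import torch

class LoggingTensor(torch.Tensor):
    # __torch_function__ receives the function being called plus the set of
    # participating subclass types, which is what enables multiple dispatch.
    @classmethod
    def __torch_function__(cls, func, types, args=(), kwargs=None):
        if kwargs is None:
            kwargs = {}
        print(f"intercepted {func.__name__} with types {types}")
        return super().__torch_function__(func, types, args, kwargs)

x = torch.randn(3).as_subclass(LoggingTensor)
y = torch.add(x, 1)   # routes through LoggingTensor.__torch_function__
```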

Writing multithreading code has always been a pain, and in PyTorch there are buckets and buckets of multithreading related issues you have to be aware of and deal with when writing code that makes use of it. We'll cover how you interface with multithreading in PyTorch, what goes into implementing those interfaces (thread pools!) and also some miscellaneous stuff like TLS, forks and data structure thread safety that is also relevant. Further reading: TorchScript CPU inference threading documentation https://github.com/pytorch/pytorch/blob/master/docs/source/notes/cpu_threading_torchscript_inference.rst c10 thread pool https://github.com/pytorch/pytorch/blob/master/c10/core/thread_pool.h and autograd thread pool https://github.com/pytorch/pytorch/blob/master/torch/csrc/autograd/engine.cpp Tracking issue for TLS propagation across threads https://github.com/pytorch/pytorch/issues/28520…
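For orientation, these are the user-visible knobs for the two thread pools mentioned here (the value passed to set_num_threads is arbitrary):

```python
import torch

# Intra-op parallelism: threads used inside a single operator.
print("intra-op threads:", torch.get_num_threads())
torch.set_num_threads(4)

# Inter-op parallelism: threads used to run independent operators
# concurrently (e.g. TorchScript fork/join).
print("inter-op threads:", torch.get_num_interop_threads())
```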

Asynchronous versus synchronous execution (15:03)
CUDA is asynchronous, CPU is synchronous. Making them play well together can be one of the more thorny and easy to get wrong aspects of the PyTorch API. I talk about why non_blocking is difficult to use correctly, a hypothetical "asynchronous CPU" device which would help smooth over some of the API problems and also why it used to be difficult to implement async CPU (but it's not hard anymore!) At the end, I also briefly talk about how async/sync impedance can also show up in unusual places, namely the CUDA caching allocator. Further reading. CUDA semantics which discuss non_blocking somewhat https://pytorch.org/docs/stable/notes/cuda.html Issue requesting async cpu https://github.com/pytorch/pytorch/issues/44343…
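A small sketch of the pattern under discussion; it assumes a CUDA device is available, and the tensor sizes are arbitrary.

```python
import torch

if torch.cuda.is_available():
    # non_blocking copies only overlap with other work when the CPU tensor
    # lives in pinned (page-locked) memory.
    cpu = torch.randn(1024, 1024).pin_memory()
    gpu = cpu.to("cuda", non_blocking=True)  # returns immediately; copy is queued
    out = (gpu * 2).sum()                    # queued behind the copy on the same stream
    torch.cuda.synchronize()                 # CPU blocks here until the GPU catches up
    print(out.item())
```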

We talk about gradcheck, the property based testing mechanism that we use to verify the correctness of analytic gradient formulas in PyTorch. I'll talk a bit about testing in general, property based testing and why gradcheck is a particularly useful property based test. There will be some calculus, although I've tried to keep the math mostly to intuitions and pointers on what to read up on elsewhere. Further reading. Gradcheck mechanics, a detailed mathematical explanation of how it works https://pytorch.org/docs/stable/notes/gradcheck.html In particular, it also explains how gradcheck extends to complex numbers JAX has a pretty good explanation about vjp and jvp at https://jax.readthedocs.io/en/latest/notebooks/autodiff_cookbook.html Fast gradcheck tracking issue https://github.com/pytorch/pytorch/issues/53876…
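A minimal usage sketch (the function under test is arbitrary); gradcheck wants double precision inputs so the finite-difference comparison is meaningful.

```python
import torch

def f(x, y):
    return (x * y).sin()

x = torch.randn(3, dtype=torch.double, requires_grad=True)
y = torch.randn(3, dtype=torch.double, requires_grad=True)

# Compares the analytic Jacobian produced by autograd against a
# finite-difference estimate; returns True, or raises on mismatch.
print(torch.autograd.gradcheck(f, (x, y)))
```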

torch.use_deterministic_algorithms (10:50)
torch.use_deterministic_algorithms lets you force PyTorch to use deterministic algorithms. It's very useful for debugging! There are some errors in the recording: the feature is called torch.use_deterministic_algorithms, and there is not actually a capability to warn (this was in an old version of the PR but taken out), we just error if you hit nondeterministic code. Docs: https://pytorch.org/docs/stable/generated/torch.use_deterministic_algorithms.html#torch.use_deterministic_algorithms…
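Usage is a single call; the snippet below just flips the flag and checks it. (Per the linked docs, some CUDA ops additionally require the CUBLAS_WORKSPACE_CONFIG environment variable to be set.)

```python
import torch

torch.use_deterministic_algorithms(True)
print(torch.are_deterministic_algorithms_enabled())  # True

# From here on, ops that have no deterministic implementation raise
# RuntimeError instead of silently returning nondeterministic results.
```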

Reference counting is a common memory management technique in C++ but PyTorch does its reference counting in a slightly idiosyncratic way using intrusive_ptr. We'll talk about why intrusive_ptr exists, the reason why refcount bumps are slow in C++ (but not in Python), what's up with const Tensor& everywhere, why the const is a lie and how TensorRef lets you create a const Tensor& from a TensorImpl* without needing to bump your reference count. Further reading. Why you shouldn't feel bad about passing tensor by reference https://dev-discuss.pytorch.org/t/we-shouldnt-feel-bad-about-passing-tensor-by-reference/85 Const correctness in PyTorch https://github.com/zdevito/ATen/issues/27 TensorRef RFC https://github.com/pytorch/rfcs/pull/16…

Memory layout specifies how the logical multi-dimensional tensor maps its elements onto physical linear memory. Some layouts admit more efficient implementations, e.g., NCHW versus NHWC. Memory layout makes use of striding to allow users to conveniently represent their tensors with different physical layouts without having to explicitly tell every operator what to do. Further reading. Tutorial https://pytorch.org/tutorials/intermediate/memory_format_tutorial.html Memory format RFC https://github.com/pytorch/pytorch/issues/19092 Layout permutation proposal (not implemented) https://github.com/pytorch/pytorch/issues/32078…
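A quick illustration of strides and channels_last (the shape is arbitrary):

```python
import torch

x = torch.randn(2, 3, 4, 5)                  # logical NCHW, contiguous
print(x.stride())                             # (60, 20, 5, 1)

y = x.to(memory_format=torch.channels_last)   # NHWC physical layout
print(y.shape)                                # logical shape is unchanged
print(y.stride())                             # (60, 1, 15, 3)

# Operators see the same logical tensor; the strides tell them how the
# elements are actually laid out in memory.
```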

pytorch-probot is a GitHub application that we use to automate common tasks in GitHub. I talk about what it does and some design philosophy for it. Repo is at: https://github.com/pytorch/pytorch-probot

API design via lexical and dynamic scoping (21:44)
Lexical and dynamic scoping are useful tools to reason about various API design choices in PyTorch, related to context managers, global flags, dynamic dispatch, and how to deal with BC-breaking changes. I'll walk through three case studies, one from Python itself (changing the meaning of division to true division), and two from PyTorch (device context managers, and __torch_function__ for factory functions). Further reading. Me unsuccessfully asking around if there was a way to simulate __future__ in libraries https://stackoverflow.com/questions/66927362/way-to-opt-into-bc-breaking-changes-on-methods-within-a-single-module A very old issue asking for a way to change the default GPU device https://github.com/pytorch/pytorch/issues/260 and a global GPU flag https://github.com/pytorch/pytorch/issues/7535 A more modern issue based off the lexical module idea https://github.com/pytorch/pytorch/issues/27878 Array module NEP https://numpy.org/neps/nep-0037-array-module.html…
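To make the device-context-manager case study concrete, here is a hedged sketch of dynamic versus lexical scoping of device selection; the context-manager form of torch.device assumes a reasonably recent PyTorch 2.x.

```python
import torch

# Dynamic scoping: the ambient default affects every factory call made while
# the context is active, including calls inside other functions.
with torch.device("cpu"):
    a = torch.empty(3)
print(a.device)

# Lexical scoping: the choice is spelled out at the call site itself.
b = torch.empty(3, device="cpu")
print(b.device)
```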

Today, Shen Li (mrshenli) joins me to talk about distributed computation in PyTorch. What is distributed? What kinds of things go into making distributed work in PyTorch? What's up with all of the optimizations people want to do here? Further reading. PyTorch distributed overview https://pytorch.org/tutorials/beginner/dist_overview.html Distributed data parallel https://pytorch.org/docs/stable/notes/ddp.html…
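A self-contained, single-process sketch of the DDP API (gloo backend, world size 1); real jobs launch one process per rank, typically with torchrun and the nccl backend on GPUs.

```python
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", "29500")
dist.init_process_group("gloo", rank=0, world_size=1)

model = DDP(torch.nn.Linear(10, 10))
out = model(torch.randn(4, 10))
out.sum().backward()              # gradients are all-reduced across ranks here
dist.destroy_process_group()
```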

Double backwards is PyTorch's way of implementing higher order differentiation. Why might you want it? How does it work? What are some of the weird things that happen when you do this? Further reading. Epic PR that added double backwards support for convolution initially https://github.com/pytorch/pytorch/pull/1643…
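The core mechanism in user-facing terms, with a toy scalar function (y = x**3):

```python
import torch

x = torch.tensor(2.0, requires_grad=True)
y = x ** 3

# create_graph=True records the backward computation itself, so the
# resulting gradient can be differentiated again.
(dy_dx,) = torch.autograd.grad(y, x, create_graph=True)
print(dy_dx)        # 3 * x**2 = 12

(d2y_dx2,) = torch.autograd.grad(dy_dx, x)
print(d2y_dx2)      # 6 * x = 12
```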

Functional modules are a proposed mechanism to take PyTorch's existing NN module API and transform it into a functional form, where all the parameters are explicit arguments. Why would you want to do this? What does functorch have to do with it? How come PyTorch's existing APIs don't seem to need this? What are the design problems? Further reading. Proposal in GitHub issues https://github.com/pytorch/pytorch/issues/49171 Linen design in flax https://flax.readthedocs.io/en/latest/design_notes/linen_design_principles.html…
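What eventually shipped along these lines is torch.func.functional_call, sketched below with an arbitrary module; the episode predates it, so treat this as the modern analogue rather than the proposal itself.

```python
import torch
from torch.func import functional_call

model = torch.nn.Linear(3, 2)

# Pull the parameters out into an explicit dict...
params = dict(model.named_parameters())

# ...and pass them back in per call, so the module runs "functionally"
# with whatever parameters you hand it.
x = torch.randn(4, 3)
out = functional_call(model, params, (x,))
print(out.shape)
```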

What are CUDA graphs? How are they implemented? What does it take to actually use them in PyTorch? Further reading. NVIDIA has docs on CUDA graphs https://developer.nvidia.com/blog/cuda-graphs/ Nuts and bolts implementation PRs from mcarilli: https://github.com/pytorch/pytorch/pull/51436 https://github.com/pytorch/pytorch/pull/46148…
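For reference, a hedged sketch of the manual capture API; it requires a CUDA device, and the warm-up-on-a-side-stream dance follows the pattern in the PyTorch docs.

```python
import torch

if torch.cuda.is_available():
    x = torch.randn(1024, device="cuda")

    # Warm up on a side stream before capture.
    s = torch.cuda.Stream()
    s.wait_stream(torch.cuda.current_stream())
    with torch.cuda.stream(s):
        y = x * 2 + 1
    torch.cuda.current_stream().wait_stream(s)

    # Capture: kernels are recorded into the graph, not executed.
    g = torch.cuda.CUDAGraph()
    with torch.cuda.graph(g):
        y = x * 2 + 1

    # Replay: copy new data into the captured input, then re-run the kernels.
    x.copy_(torch.randn(1024, device="cuda"))
    g.replay()
    print(y.sum().item())
```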