Code generation
Why does PyTorch use code generation as part of its build process? Why doesn't it use C++ templates? What is code generation used for? What are the pros and cons of using code generation? What are some other ways to do the same things we currently do with code generation?
Further reading.
- Top level file for the new code generation pipeline https://github.com/pytorch/pytorch/blob/master/tools/codegen/gen.py
- Out of tree external backend code generation from Brian Hirsh: https://github.com/pytorch/xla/issues/2871
- Documentation for native_functions.yaml https://github.com/pytorch/pytorch/blob/master/aten/src/ATen/native/README.md (have you seen this README before? Yes you've seen this README before. Imma post it again.)
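To make the native_functions.yaml link above concrete, here is a deliberately tiny, hedged sketch of the core idea: read a schema line of the sort that file contains and render a C++ declaration from it. This is an illustration only, not the real tools/codegen/gen.py pipeline (which models C++ types, overloads, mutability, and dispatch far more carefully); the add.Tensor schema and the type mapping below are just examples.

```python
# Toy sketch of the core codegen idea: parse a schema line of the kind found
# in native_functions.yaml and render a C++ declaration from it. Illustration
# only; the real tools/codegen/gen.py pipeline is much more careful.

# Simplified (hypothetical) mapping from schema types to C++ types.
CPP_TYPE = {"Tensor": "const at::Tensor&", "Scalar": "const at::Scalar&"}
CPP_RETURN = {"Tensor": "at::Tensor"}

def emit_cpp_decl(schema: str) -> str:
    name, rest = schema.split("(", 1)
    args_str, ret = rest.rsplit("->", 1)
    args = []
    for arg in args_str.rstrip().rstrip(")").split(","):
        arg = arg.strip()
        if not arg or arg == "*":          # '*' marks keyword-only arguments
            continue
        typ, arg_name = arg.split(" ", 1)
        arg_name = arg_name.split("=")[0]  # drop the default value
        args.append(f"{CPP_TYPE[typ]} {arg_name}")
    base = name.split(".")[0]              # drop the overload name after '.'
    return f"{CPP_RETURN[ret.strip()]} {base}({', '.join(args)});"

print(emit_cpp_decl(
    "add.Tensor(Tensor self, Tensor other, *, Scalar alpha=1) -> Tensor"))
# at::Tensor add(const at::Tensor& self, const at::Tensor& other, const at::Scalar& alpha);
```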
Outline:
- High level: reduce the amount of code in PyTorch and make it easier to develop
  - Strongly typed Python
- Stuff we're using codegen for
  - Meta point: stuff C++ metaprogramming can't do
  - C++ APIs (functions, methods on classes)
    - Especially for forwarding (operator dot doko)
  - Prototypes for C++ to implement
  - YAML files used by external frameworks for binding (accidental)
  - Python argument parsing (sketched below)
  - .pyi generation
  - Autograd classes that hold the data saved for backward (sketched below)
  - Otherwise complicated constexpr computation (e.g., parsing JIT schema)
- Pros
  - Better surface syntax (native_functions.yaml, JIT schema, derivatives.yaml)
  - Better error messages (template error messages are famously bad)
  - Easier to organize complicated code, especially over nontrivial input data structures
  - Easier to debug by looking at the generated code
- Cons
  - Not as portable (templates can be used by anyone)
  - Less good modeling for C++ type-based metaprogramming (we've replicated a crappy version of the C++ type system in our codegen)
- Counterpoints in the design space
  - C++ templates: just as efficient
  - Boxed fallback: simpler, less efficient (sketched below)
- Open question: can you have the best of both worlds, e.g., with partially evaluated interpreters?
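On the Python argument parsing bullet: the generated bindings are C++ built around PyTorch's PythonArgParser, but conceptually they do something like this hedged, pure-Python sketch: match the call against the declared overloads, fill in keyword-only defaults, then forward to the C++ kernel (faked here). The overload table and _cpp_add are stand-ins, not real PyTorch names.

```python
# Hedged, pure-Python sketch of what a generated Python binding for an op
# like torch.add conceptually does; the real generated code is C++.
def _cpp_add(self, other, alpha):              # stand-in for the C++ kernel
    return self + alpha * other

_OVERLOADS = [
    # (positional parameter names, keyword-only defaults)
    (("self", "other"), {"alpha": 1}),
]

def add(*args, **kwargs):
    for positional, kwonly_defaults in _OVERLOADS:
        if len(args) == len(positional) and set(kwargs) <= set(kwonly_defaults):
            bound = dict(kwonly_defaults, **kwargs)  # defaults, then overrides
            return _cpp_add(*args, **bound)
    raise TypeError("add(): no matching overload")

print(add(2.0, 3.0), add(2.0, 3.0, alpha=10))  # 5.0 32.0
```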
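On the autograd bullet: from each formula in derivatives.yaml, the codegen emits a C++ node class (e.g. MulBackward0) whose fields hold exactly the data the backward formula needs. A rough Python analogue, assuming the textbook mul derivative (grad * other for self, grad * self for other):

```python
# Rough Python analogue of a generated autograd node: a class per operator
# whose fields are the "saved data" its backward formula needs. The real
# generated code is C++ (SavedVariable fields on a Node subclass).
class MulBackward:
    def __init__(self, self_, other):
        self.saved_self = self_    # saved because grad w.r.t. other needs it
        self.saved_other = other   # saved because grad w.r.t. self needs it

    def apply(self, grad):
        grad_self = grad * self.saved_other
        grad_other = grad * self.saved_self
        return grad_self, grad_other

node = MulBackward(2.0, 5.0)
print(node.apply(1.0))  # (5.0, 2.0)
```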
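On the boxed fallback counterpoint: the trade-off is one generic handler over type-erased ("boxed") arguments versus a generated, fully typed wrapper per operator. A hedged sketch of the contrast, with all names hypothetical and deliberately simplified:

```python
# Hedged sketch of the design-space contrast: one boxed fallback that handles
# every operator over type-erased arguments, versus the per-operator,
# statically typed wrappers that codegen (or C++ templates) produce.
from typing import Any, Callable, Dict, List

KERNELS: Dict[str, Callable[..., Any]] = {
    "aten::add": lambda self, other, alpha=1: self + alpha * other,
    "aten::mul": lambda self, other: self * other,
}

def boxed_call(op: str, stack: List[Any]) -> None:
    # Simpler: one implementation covers every op, but each call pays for
    # boxing/unboxing and gives up static type checking.
    result = KERNELS[op](*stack)   # "unbox" by splatting the stack
    stack.clear()
    stack.append(result)           # push the boxed result back

def add(self: float, other: float, alpha: float = 1.0) -> float:
    # What codegen emits instead: a typed wrapper per operator (imagine one
    # of these for every entry in native_functions.yaml).
    return KERNELS["aten::add"](self, other, alpha)

stack: List[Any] = [2.0, 3.0, 10.0]
boxed_call("aten::add", stack)
print(stack[0], add(2.0, 3.0, alpha=10.0))  # 32.0 32.0
```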
Player FM סורק את האינטרנט עבור פודקאסטים באיכות גבוהה בשבילכם כדי שתהנו מהם כרגע. זה יישום הפודקאסט הטוב ביותר והוא עובד על אנדרואיד, iPhone ואינטרנט. הירשמו לסנכרון מנויים במכשירים שונים.