Open Pre-Trained Transformer Language Models (OPT): What does it take to train GPT-3?
Andrew Yates (Assistant Professor at the University of Amsterdam) and Sergi Castella i Sapé discuss the recent "Open Pre-trained Transformer (OPT) Language Models" paper from Meta AI (formerly Facebook). In this replication work, Meta developed and trained a 175-billion-parameter Transformer very similar to OpenAI's GPT-3, documenting the process in detail to share their findings with the community. The code, pretrained weights, and logbook are available in their GitHub repository (links below).
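The smaller OPT checkpoints are also mirrored on the Hugging Face Hub, so a quick way to try the model is through the transformers library rather than metaseq itself. A minimal sketch, assuming the facebook/opt-125m checkpoint (the 175B weights are only available on request):

```python
# Minimal sketch: generate text with a small OPT checkpoint via Hugging Face
# Transformers. Swap in a larger "facebook/opt-*" checkpoint if you have the memory.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "facebook/opt-125m"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "Open-sourcing large language models matters because"
inputs = tokenizer(prompt, return_tensors="pt")

# Greedy decoding, kept short for a quick sanity check.
output_ids = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```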
Links
❓Feedback Form: https://scastella.typeform.com/to/rg7a5GfJ
📄 OPT paper: https://arxiv.org/abs/2205.01068
👾 Code: https://github.com/facebookresearch/metaseq
📒 Logbook: https://github.com/facebookresearch/metaseq/blob/main/projects/OPT/chronicles/OPT175B_Logbook.pdf
✍️ OPT Official Blog Post: https://ai.facebook.com/blog/democratizing-access-to-large-scale-language-models-with-opt-175b/
OpenAI Embeddings API: https://openai.com/blog/introducing-text-and-code-embeddings/
Nils Reimers' critique of OpenAI Embeddings API: https://medium.com/@nils_reimers/openai-gpt-3-text-embeddings-really-a-new-state-of-the-art-in-dense-text-embeddings-6571fe3ec9d9
Timestamps:
00:00 Introduction and housekeeping: new feedback form, ACL conference highlights
02:42 The convergence between NLP and Neural IR techniques
06:43 Open Pretrained Transformer motivation and scope, reproducing GPT-3 and open-sourcing
08:16 Basics of OPT: architecture, pre-training objective, teacher forcing, tokenizer, training data (see the sketch after the timestamps)
13:40 Preliminary experiments findings: hyperparameters, training stability, spikiness
20:08 Problems that appear at scale when training with 992 GPUs
23:01 Using temperature to check whether GPUs are working
25:00 Training the largest model: what to do when the loss explodes? (which happens quite often)
29:15 When they switched away from AdamW to SGD
32:00 Results: successful, but not quite GPT-3 level. Toxicity?
35:45 Replicability of Large Language Models research: was GPT-3 replicable? What difference does it make?
37:25 What makes a paper replicable?
40:33 Directions in which large Language Models are applied to Information Retrieval
45:15 Final thoughts and takeaways
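As a companion to the 08:16 segment on OPT's pre-training objective, here is a minimal PyTorch-style sketch of causal language modeling with teacher forcing (an illustration, not Meta's metaseq implementation): the ground-truth tokens condition every position, and the model learns to predict the next token.

```python
import torch
import torch.nn.functional as F

def causal_lm_loss(model, token_ids):
    """Next-token prediction loss with teacher forcing.

    token_ids: LongTensor of shape (batch, seq_len) with ground-truth tokens.
    model: assumed to be any decoder-only Transformer mapping a (batch, T)
           token tensor to (batch, T, vocab_size) logits under a causal mask.
    """
    # Shift by one: positions 0..T-2 are inputs, positions 1..T-1 are targets.
    inputs = token_ids[:, :-1]
    targets = token_ids[:, 1:]

    logits = model(inputs)  # (batch, seq_len - 1, vocab_size)

    # Cross-entropy at every position. Because the ground-truth prefix (not the
    # model's own samples) is fed in as context, this is teacher forcing.
    return F.cross_entropy(
        logits.reshape(-1, logits.size(-1)),
        targets.reshape(-1),
    )
```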
21 episodes
All episodes
AGI vs ASI: The future of AI-supported decision making with Louis Rosenberg 54:42
EXAONE 3.0: An Expert AI for Everyone (with Hyeongu Yun) 24:57
Zeta-Alpha-E5-Mistral: Finetuning LLMs for Retrieval (with Arthur Câmara) 19:35
ColPali: Document Retrieval with Vision-Language Models only (with Manuel Faysse) 34:48
Using LLMs in Information Retrieval (w/ Ronak Pradeep) 22:15
Designing Reliable AI Systems with DSPy (w/ Omar Khattab) 59:57
The Power of Noise (w/ Florin Cuconasu) 11:45
Benchmarking IR Models (w/ Nandan Thakur) 21:55
Baking the Future of Information Retrieval Models 27:05
Hacking JIT Assembly to Build Exascale AI Infrastructure 38:04
The Promise of Language Models for Search: Generative Information Retrieval 1:07:31
Task-aware Retrieval with Instructions 1:11:13
Generating Training Data with Large Language Models w/ Special Guest Marzieh Fadaee 1:16:14
ColBERT + ColBERTv2: late interaction at a reasonable inference cost 57:30
Evaluating Extrapolation Performance of Dense Retrieval: How does DR compare to cross encoders when it comes to generalization? 58:30
Open Pre-Trained Transformer Language Models (OPT): What does it take to train GPT-3? 47:12
Few-Shot Conversational Dense Retrieval (ConvDR) w/ special guest Antonios Krasakis 1:23:11
Transformer Memory as a Differentiable Search Index: memorizing thousands of random doc ids works!? 1:01:40
Learning to Retrieve Passages without Supervision: finally unsupervised Neural IR? 59:10
The Curse of Dense Low-Dimensional Information Retrieval for Large Index Sizes 54:13
Shallow Pooling for Sparse Labels: the shortcomings of MS MARCO 1:07:17