Player FM - Internet Radio Done Right
286 subscribers
Checked 4M ago
Added eight years ago
Content provided by NLP Highlights and Allen Institute for Artificial Intelligence. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by NLP Highlights and Allen Institute for Artificial Intelligence or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://he.player.fm/legal.
Player FM - Podcast App
Go offline with the Player FM app!
109 - What Does Your Model Know About Language, with Ellie Pavlick
Manage episode 257394627 series 1452120
How do we know, in a concrete quantitative sense, what a deep learning model knows about language? In this episode, Ellie Pavlick talks about two broad directions to address this question: structural and behavioral analysis of models. In structural analysis, we often train a linear classifier for some linguistic phenomenon we'd like to probe (e.g., syntactic dependencies) while using the (frozen) weights of a model pre-trained on some tasks (e.g., masked language models). What can we conclude from the results of probing experiments? What does probing tell us about the linguistic abstractions encoded in each layer of an end-to-end pre-trained model? How well does it match classical NLP pipelines? How important is it to freeze the pre-trained weights in probing experiments? In contrast, behavioral analysis evaluates a model's ability to distinguish between inputs which respect vs. violate a linguistic phenomenon using acceptability or entailment tasks, e.g., can the model predict which is more likely: "dog bites man" vs. "man bites dog"? We discuss the significance of which format to use for behavioral tasks, and how easy it is for humans to perform such tasks. Ellie Pavlick's homepage: https://cs.brown.edu/people/epavlick/ BERT Rediscovers the Classical NLP Pipeline, by Ian Tenney, Dipanjan Das, Ellie Pavlick https://arxiv.org/pdf/1905.05950.pdf?fbclid=IwAR3gzFibSBoDGdjqVu9Gq0mh1lDdRZa7dm42JuXXUfjG6rKZ44iHIOdV6jg Inherent Disagreements in Human Textual Inferences, by Ellie Pavlick and Tom Kwiatkowski https://www.mitpressjournals.org/doi/full/10.1162/tacl_a_00293
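The structural-probing recipe described above can be sketched in a few lines. This is a minimal illustration only: synthetic random features stand in for the frozen representations of a real pre-trained model, and the integer labels play the role of a linguistic phenomenon such as part-of-speech tags; only the linear probe's parameters are trained.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for frozen pre-trained token representations:
# 200 tokens, 16-dim vectors, each labeled with one of 3 POS-like classes.
n_tokens, dim, n_classes = 200, 16, 3
labels = rng.integers(0, n_classes, size=n_tokens)
class_means = rng.normal(size=(n_classes, dim))
feats = class_means[labels] + 0.5 * rng.normal(size=(n_tokens, dim))

X_tr, y_tr = feats[:150], labels[:150]
X_te, y_te = feats[150:], labels[150:]

# The probe: a single linear layer trained with softmax regression.
# Only W and b are updated; the "pre-trained" features stay frozen.
W = np.zeros((dim, n_classes))
b = np.zeros(n_classes)
onehot = np.eye(n_classes)[y_tr]
for _ in range(300):
    logits = X_tr @ W + b
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    grad = (p - onehot) / len(X_tr)   # averaged cross-entropy gradient
    W -= 0.1 * X_tr.T @ grad
    b -= 0.1 * grad.sum(axis=0)

# Held-out probing accuracy is read as evidence of how linearly
# decodable the phenomenon is from the frozen representations.
acc = ((X_te @ W + b).argmax(axis=1) == y_te).mean()
print(f"probe accuracy: {acc:.2f}")
```

One caveat the episode raises applies directly here: high probe accuracy shows the information is linearly recoverable, not that the model uses it.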
145 episodes
All episodes

1 "Imaginative AI" with Mohamed Elhoseiny 23:19
This podcast episode features Dr. Mohamed Elhoseiny, a true luminary in the realm of computer vision with over a decade of groundbreaking research. As an Assistant Professor at KAUST, Dr. Elhoseiny's work delves into the intersections of Computer Vision, Language & Vision, and Computational Creativity in Art, Fashion, and AI. Notably, he co-organized the 1st and 2nd Workshops on Closing the Loop between Vision and Language, demonstrating his commitment to advancing interdisciplinary research. With a rich educational background, including Stanford University's Graduate School of Business Ignite Program and MS/PhD research at Rutgers University, coupled with influential stints at Stanford, Baidu Research, Facebook AI Research, Adobe Research, and SRI International, Dr. Elhoseiny brings a wealth of experience to our discussion.…

1 142 - Science Of Science, with Kyle Lo 48:57
Our first guest with this new format is Kyle Lo, the most senior lead scientist in the Semantic Scholar team at Allen Institute for AI (AI2), who kindly agreed to share his perspective on #Science of #Science (#scisci) on our podcast. SciSci is concerned with studying how people do science, and includes developing methods and tools to help people consume AND produce science. Kyle has made several critical contributions in this field which enabled a lot of SciSci work over the past 5+ years, ranging from novel NLP methods (e.g., SciBERT https://lnkd.in/gTP_tYiF), to open data collections (e.g., S2ORC https://lnkd.in/g4J6tXCG), to toolkits for manipulating scientific documents (e.g., PaperMage https://lnkd.in/gwU7k6mJ, which JUST received a Best Paper Award 🏆 at EMNLP 2023). Kyle Lo's homepage: https://kyleclo.github.io/…

1 141 - Building an open source LM, with Iz Beltagy and Dirk Groeneveld 29:36
In this special episode of NLP Highlights, we discussed building and open sourcing language models. What is the usual recipe for building large language models? What does it mean to open source them? What new research questions can we answer by open sourcing them? We particularly focused on the ongoing Open Language Model (OLMo) project at AI2, and invited Iz Beltagy and Dirk Groeneveld, the research and engineering leads of the OLMo project to chat. Blog post announcing OLMo: https://blog.allenai.org/announcing-ai2-olmo-an-open-language-model-made-by-scientists-for-scientists-ab761e4e9b76 Organizations interested in partnership can express their interest here: https://share.hsforms.com/1blFWEWJ2SsysSXFUEJsxuA3ioxm You can find Iz at twitter.com/i_beltagy and Dirk at twitter.com/mechanicaldirk…

1 140 - Generative AI and Copyright, with Chris Callison-Burch 51:28
In this special episode, we chatted with Chris Callison-Burch about his testimony in the recent U.S. Congress Hearing on the Interoperability of AI and Copyright Law. We started by asking Chris about the purpose and the structure of this hearing. Then we talked about the ongoing discussion on how the copyright law is applicable to content generated by AI systems, the potential risks generative AI poses to artists, and Chris’ take on all of this. We end the episode with a recording of Chris’ opening statement at the hearing.…

1 139 - Coherent Long Story Generation, with Kevin Yang 45:18
How can we generate coherent long stories from language models? Ensuring that the generated story has long range consistency and that it conforms to a high level plan is typically challenging. In this episode, Kevin Yang describes their system that prompts language models to first generate an outline, and iteratively generate the story while following the outline and reranking and editing the outputs for coherence. We also discussed the challenges involved in evaluating long generated texts. Kevin Yang is a PhD student at UC Berkeley. Kevin's webpage: https://people.eecs.berkeley.edu/~yangk/ Papers discussed in this episode: 1. Re3: Generating Longer Stories With Recursive Reprompting and Revision (https://www.semanticscholar.org/paper/Re3%3A-Generating-Longer-Stories-With-Recursive-and-Yang-Peng/2aab6ca1a8dae3f3db6d248231ac3fa4e222b30a) 2. DOC: Improving Long Story Coherence With Detailed Outline Control (https://www.semanticscholar.org/paper/DOC%3A-Improving-Long-Story-Coherence-With-Detailed-Yang-Klein/ef6c768f23f86c4aa59f7e859ca6ffc1392966ca)…

1 138 - Compositional Generalization in Neural Networks, with Najoung Kim 48:22
Compositional generalization refers to the capability of models to generalize to out-of-distribution instances by composing information obtained from the training data. In this episode we chatted with Najoung Kim, on how to explicitly evaluate specific kinds of compositional generalization in neural network models of language. Najoung described COGS, a dataset she built for this, some recent results in the space, and why we should be careful about interpreting the results given the current practice of pretraining models of lots of unlabeled text. Najoung's webpage: https://najoungkim.github.io/ Papers we discussed: 1. COGS: A Compositional Generalization Challenge Based on Semantic Interpretation (Kim et al., 2020): https://www.semanticscholar.org/paper/b20ddcbd239f3fa9acc603736ac2e4416302d074 2. Compositional Generalization Requires Compositional Parsers (Weissenhorn et al., 2022): https://www.semanticscholar.org/paper/557ebd17b7c7ac4e09bd167d7b8909b8d74d1153 3. Uncontrolled Lexical Exposure Leads to Overestimation of Compositional Generalization in Pretrained Models (Kim et al., 2022): https://www.semanticscholar.org/paper/8969ea3d254e149aebcfd1ffc8f46910d7cb160e Note that we referred to the final paper by an earlier name in the discussion.…

1 137 - Nearest Neighbor Language Modeling and Machine Translation, with Urvashi Khandelwal 35:56
We invited Urvashi Khandelwal, a research scientist at Google Brain, to talk about nearest neighbor language and machine translation models. These models interpolate parametric (conditional) language models with non-parametric distributions over the closest values in some data stores built from relevant data. Not only are these models shown to outperform the usual parametric language models, they also have important implications for memorization and generalization in language models. Urvashi's webpage: https://urvashik.github.io Papers discussed: 1) Generalization through Memorization: Nearest Neighbor Language Models (https://www.semanticscholar.org/paper/7be8c119dbe065c52125ee7716601751f3116844) 2) Nearest Neighbor Machine Translation (https://www.semanticscholar.org/paper/20d51f8e449b59c7e140f7a7eec9ab4d4d6f80ea)…
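The interpolation the episode describes can be sketched as follows. Everything here is a toy stand-in: the datastore keys, the query vector, the vocabulary, and the parametric distribution are made up for illustration, but the λ-mixture of a parametric distribution with a softmax-over-negative-distances kNN distribution follows the kNN-LM idea discussed above.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = ["the", "dog", "bites", "man"]

# Toy datastore: (context-vector key -> next-token value) pairs, as would
# be collected by running a trained LM over its training data.
keys = rng.normal(size=(50, 8))
values = rng.integers(0, len(vocab), size=50)

def knn_distribution(query, k=4, temp=1.0):
    """Non-parametric next-token distribution from the k nearest keys."""
    d = np.linalg.norm(keys - query, axis=1)
    nearest = np.argsort(d)[:k]
    w = np.exp(-d[nearest] / temp)       # softmax weights over -distance
    p = np.zeros(len(vocab))
    np.add.at(p, values[nearest], w)     # aggregate weight per token
    return p / w.sum()

# Parametric LM distribution for the same context (made up here).
p_lm = np.array([0.5, 0.2, 0.2, 0.1])

# kNN-LM interpolation: p = lam * p_knn + (1 - lam) * p_lm
lam = 0.25
query = rng.normal(size=8)
p_knn = knn_distribution(query)
p_final = lam * p_knn + (1 - lam) * p_lm
print(p_final.round(3))
```

In a real system the keys would be hidden states from the LM itself and the lookup would use an approximate-nearest-neighbor index rather than a brute-force distance scan.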

1 136 - Including Signed Languages in NLP, with Kayo Yin and Malihe Alikhani 1:02:15
In this episode, we talk with Kayo Yin, an incoming PhD student at Berkeley, and Malihe Alikhani, an assistant professor at the University of Pittsburgh, about opportunities for the NLP community to contribute to Sign Language Processing (SLP). We talked about the history of and misconceptions about sign languages, high-level similarities and differences between spoken and sign languages, distinct linguistic features of signed languages, representations, computational resources, SLP tasks, and suggestions for better design and implementation of SLP models.…

1 135 - PhD Application Series: After Submitting Applications 36:53
This episode is the third in our current series on PhD applications. We talk about what the PhD application process looks like after applications are submitted. We start with a general overview of the timeline, then talk about how to approach interviews and conversations with faculty, and finish by discussing the different factors to consider in deciding between programs. The guests for this episode are Rada Mihalcea (Professor at the University of Michigan), Aishwarya Kamath (PhD student at NYU), and Sanjay Subramanian (PhD student at UC Berkeley). Homepages: - Aishwarya Kamath: https://ashkamath.github.io/ - Sanjay Subramanian: https://sanjayss34.github.io/ - Rada Mihalcea: https://web.eecs.umich.edu/~mihalcea/ The hosts for this episode are Alexis Ross and Nishant Subramani.…

1 134 - PhD Application Series: PhDs in Europe versus the US, with Barbara Plank and Gonçalo Correia 38:29
This episode is the second in our current series on PhD applications. How do PhD programs in Europe differ from PhD programs in the US, and how should people decide between them? In this episode, we invite Barbara Plank (Professor at ITU, IT University of Copenhagen) and Gonçalo Correia (ELLIS PhD student at University of Lisbon and University of Amsterdam) to share their perspectives on this question. We start by talking about the main differences between pursuing a PhD in Europe and the US. We then talk about the application requirements for European PhD programs and factors to consider when deciding whether to apply in Europe or the US. We conclude by talking about the ELLIS PhD program, a relatively new program for PhD students that facilitates collaborations across Europe. ELLIS PhD program: https://ellis.eu/phd-postdoc (Application Deadline: November 15, 2021) Homepages: - Barbara Plank: https://bplank.github.io/ - Gonçalo Correia: https://goncalomcorreia.github.io/ The hosts for this episode are Alexis Ross and Zhaofeng Wu.…

1 133 - PhD Application Series: Preparing Application Materials, with Nathan Schneider and Roma Patel 43:54
This episode is the first in our current series on PhD applications. How should people prepare their applications to PhD programs in NLP? In this episode, we invite Nathan Schneider (Professor of Linguistics and Computer Science at Georgetown University) and Roma Patel (PhD student in Computer Science at Brown University) to share their perspectives on preparing application materials. We start by talking about what factors should go into the decision to apply for PhD programs and how to gain relevant experience. We then talk about the most important parts of an application, focusing particularly on how to write a strong statement of purpose and choose recommendation letter writers. Blog posts mentioned in this episode: - Nathan Schneider’s Advice on Statements of Purpose: https://nschneid.medium.com/inside-ph-d-admissions-what-readers-look-for-in-a-statement-of-purpose-3db4e6081f80 - Student Perspectives on Applying to NLP PhD Programs: https://blog.nelsonliu.me/2019/10/24/student-perspectives-on-applying-to-nlp-phd-programs/ Homepages: - Nathan Schneider: https://people.cs.georgetown.edu/nschneid/ - Roma Patel: http://cs.brown.edu/people/rpatel59/ The hosts for this episode are Alexis Ross and Nishant Subramani.…

1 132 - Alexa Prize Socialbot Grand Challenge and Alquist 4.0, with Petr Marek 41:43
In this episode, we discussed the Alexa Prize Socialbot Grand Challenge and this year's winning submission, Alquist 4.0, with Petr Marek, a member of the winning team. Petr gave us an overview of their submission, the design choices that led to them winning the competition, including combining a hardcoded dialog tree and a neural generator model and extracting implicit personal information about users from their responses, and some outstanding challenges. Petr Marek is a PhD student at the Czech Technical University in Prague. More about the Alexa Prize challenges: https://developer.amazon.com/alexaprize Technical report on Alquist 4.0: https://arxiv.org/abs/2109.07968…

1 131 - Opportunities and Barriers between HCI and NLP, with Nanna Inie and Leon Derczynski 46:54
What can NLP researchers learn from Human Computer Interaction (HCI) research? We chatted with Nanna Inie and Leon Derczynski to find out. We discussed HCI's research processes including methods of inquiry, the data annotation processes used in HCI, and how they are different from NLP, and the cognitive methods used in HCI for qualitative error analyses. We also briefly talked about the opportunities the field of HCI presents for NLP researchers. This discussion is based on the following paper: https://aclanthology.org/2021.hcinlp-1.16/ Nanna Inie is a postdoctoral researcher and Leon Derczynski is an associate professor in CS at the IT University of Copenhagen. The hosts for this episode are Ana Marasović and Pradeep Dasigi.…

1 130 - Linking human cognitive patterns to NLP Models, with Lisa Beinborn 44:02
In this episode, we talk with Lisa Beinborn, an assistant professor at Vrije Universiteit Amsterdam, about how to use human cognitive signals to improve and analyze NLP models. We start by discussing different kinds of cognitive signals—eye-tracking, EEG, MEG, and fMRI—and challenges associated with using them. We then turn to Lisa’s recent work connecting interpretability measures with eye-tracking data, which reflect the relative importance of different tokens in human reading comprehension. We discuss empirical results suggesting that eye-tracking signals correlate strongly with gradient-based saliency measures, but not attention, in NLP methods. We conclude with discussion of the implications of these findings, as well as avenues for future work. Papers discussed in this episode: Towards best practices for leveraging human language processing signals for natural language processing: https://api.semanticscholar.org/CorpusID:219309655 Relative Importance in Sentence Processing: https://api.semanticscholar.org/CorpusID:235358922 Lisa Beinborn’s webpage: https://beinborn.eu/ The hosts for this episode are Alexis Ross and Pradeep Dasigi.…
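The gradient-based saliency measures compared against eye-tracking in this episode can be illustrated on a toy example. The linear bag-of-embeddings "model" below is an assumption made purely for illustration; with a linear scorer the gradient with respect to each embedding has a closed form, so the gradient-times-input saliency can be computed without an autodiff library.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sentence: 5 tokens, each with a (random, hypothetical) 8-dim embedding.
tokens = ["the", "movie", "was", "truly", "great"]
E = rng.normal(size=(5, 8))

# A tiny linear "model": score = w . mean(embeddings). For this model the
# gradient of the score w.r.t. each token embedding E[i] is simply w / n.
w = rng.normal(size=8)
n = len(tokens)
grad = np.tile(w / n, (n, 1))              # d(score)/d(E[i]) for each token

# Gradient-times-input saliency: |grad . embedding| per token, the kind of
# per-token importance score one would correlate with eye-tracking measures.
saliency = np.abs((grad * E).sum(axis=1))

for tok, s in sorted(zip(tokens, saliency), key=lambda t: -t[1]):
    print(f"{tok:>6}: {s:.3f}")
```

For a real NLP model the gradients would come from backpropagation, but the correlation analysis (saliency scores vs. human reading times, token by token) works the same way.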
Welcome to Player FM!
Player FM is scanning the web for high-quality podcasts for you to enjoy right now. It's the best podcast app and it works on Android, iPhone, and the web. Sign up to sync subscriptions across devices.