Exploring the NIST AI Risk Management Framework (RMF) with Patrick Hall

41:24
 
Content provided by Dr. Andrew Clark & Sid Mangalik. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Dr. Andrew Clark & Sid Mangalik or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process described here: https://he.player.fm/legal.

Join us as we chat with Patrick Hall, Principal Scientist at Hallresearch.ai and Assistant Professor at George Washington University. He shares his insights on the current state of AI, its limitations, and its potential risks. The conversation also touches on the importance of responsible AI, the role of the National Institute of Standards and Technology (NIST) AI Risk Management Framework (RMF) in adoption, and the implications of using generative AI in decision-making.
Show notes
Governance, model explainability, and high-risk applications 00:00:03

The benefits of the NIST AI Risk Management Framework 00:04:01

  • NIST does not have a profit motive, which avoids potential conflicts of interest when it provides guidance on responsible AI.
  • It solicits, adjudicates, and incorporates feedback from the public and other stakeholders.
  • The framework is not law, but its recommendations set companies up for outcome-based reviews by regulators.

Accountability challenges in "blame-free" cultures 00:10:24

  • Patrick notes that these cultures have the hardest time with the framework's recommendations
  • Practices like documentation and fair model reviews need accountability and objectivity
  • If everyone's responsible, no one's responsible.

The value of explainable models vs black-box models 00:15:00

  • Concerns about replacing explainable models with LLMs for LLMs' sake
  • Why generative AI is bad for decision-making

AI and its impact on students 00:21:49

  • Students are more indicative of where the hype and the market are today
  • Teaching them how to work through choosing the best model for the job despite the hype

AI incidents and contextual failures 00:26:17

Generative AI and homogenization problems 00:34:30
Recommended resources from Patrick:

What did you think? Let us know.

Do you have a question or a discussion topic for the AI Fundamentalists? Connect with them to comment on your favorite topics:

  • LinkedIn - Episode summaries, shares of cited articles, and more.
  • YouTube - Was it something that we said? Good. Share your favorite quotes.
  • Visit our page - see past episodes and submit your feedback! It continues to inspire future episodes.

Chapters

1. Exploring the NIST AI Risk Management Framework (RMF) with Patrick Hall (00:00:00)

2. Governance, model explainability, and high-risk applications (00:00:03)

3. The benefits of NIST AI RMF (00:04:01)

4. Accountability challenges in "blame-free" cultures (00:10:24)

5. The value of explainable models vs black box models (00:15:00)

6. AI and its impact on students (00:21:49)

7. AI incidents and contextual failures (00:26:17)

8. Generative AI and homogenization concerns (00:34:30)


