Why AI Fundamentals? | AI rigor in engineering | Generative AI isn't new | Data quality matters in machine learning
The AI Fundamentalists - Ep1
Summary
- Welcome to the first episode. 0:03
- Welcome to the first episode of the AI Fundamentalists podcast.
- Introducing the hosts.
- Introducing Sid and Andrew. 1:23
- Introducing Andrew Clark, co-founder and CTO of Monitaur.
- Introduction of the podcast topic.
- What is the proper rigorous process for using AI in manufacturing? 3:44
- Large language models and AI.
- Rigorous systems for manufacturing and innovation.
- Predictive maintenance as an example of manufacturing. 6:28
- Predictive maintenance in manufacturing.
- The Apollo program as an example of engineering rigor.
- The key things you can see when you’re new to running. 8:31
- The importance of taking a step back.
- Getting past the plateau in software engineering.
- What’s the game changer in these generative models? 10:47
- Can ChatGPT become a lawyer, doctor, or teacher?
- The inflection point with generative models.
- How can we put guardrails in place for these systems so they know when to not answer? 13:46
- How to put guardrails in place for these systems.
- The concept of multiple constraints.
- Generative AI isn’t new, it’s embedded in our daily lives. 16:20
- Generative AI is not a new technology.
- Examples of generative AI.
- The importance of data in machine learning. 19:01
- The fundamental building blocks of machine learning.
- AI is revolutionary, but it's been around for years.
- What can AI learn from systems engineering? 20:59
- The NASA Apollo program and systems engineering.
- Systems engineering fundamentals: rigor, testing, and validation.
- Understanding the why, data and holistic systems management.
- The AI curmudgeons, the AI fundamentalists.
What did you think? Let us know.
Good AI Needs Great Governance: Define, manage, and automate your AI model governance lifecycle from policy to proof.
Disclaimer: This post contains affiliate links. If you make a purchase, I may receive a commission at no extra cost to you.
Do you have a question or a discussion topic for the AI Fundamentalists? Connect with them to comment on your favorite topics:
- LinkedIn - Episode summaries, shares of cited articles, and more.
- YouTube - Was it something that we said? Good. Share your favorite quotes.
- Visit our page - see past episodes and submit your feedback! It continues to inspire future episodes.
32 episodes
All episodes
Mechanism design: Building smarter AI agents from the fundamentals, Part 1 (37:06)
What if we've been approaching AI agents all wrong? While the tech world obsesses over large language models (LLMs) and prompt engineering, there's a foundational approach that could revolutionize how we build trustworthy AI systems: mechanism design.

This episode kicks off an exciting series where we're building AI agents "the hard way"—using principles from game theory and microeconomics to create systems with predictable, governable behavior. Rather than hoping an LLM can magically handle complex multi-step processes like booking travel, Sid and Andrew explore how to design the rules of the game so that even self-interested agents produce optimal outcomes.

Drawing from our conversation with Dr. Michael Zargham (Episode 32), we break down why LLM-based agents struggle with transparency and governance. The "surface area" for errors expands dramatically when you can't explain how decisions are made across multiple steps. Instead, mechanism design creates clear states with defined optimization parameters at each stage—making the entire system more reliable and accountable.

We explore the famous Prisoner's Dilemma to illustrate how individual incentives can work against collective benefits without proper system design. Then we introduce the Vickrey-Clarke-Groves mechanism, which ensures AI agents truthfully reveal preferences and actively participate in multi-step processes—critical properties for enterprise applications.

Beyond technical advantages, this approach offers something profound: a way to preserve humanity in increasingly automated systems. By explicitly designing for values, fairness, and social welfare, we're not just building better agents—we're ensuring AI serves human needs rather than replacing human thought.

Subscribe now to follow our journey as we build an agentic travel system from first principles, applying these concepts to real business challenges. Have questions about mechanism design for AI? Send them our way for future episodes!
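For readers who want to see the incentive idea concretely, here is a minimal, hypothetical sketch (not code from the episode): a sealed-bid second-price (Vickrey) auction, the single-item special case of the Vickrey-Clarke-Groves family, in which bidding your true valuation is weakly dominant. The agent names, valuations, and the single-resource framing are all illustrative assumptions.

```python
def vickrey_auction(bids):
    """Sealed-bid second-price auction: highest bidder wins, pays the second-highest bid."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1]  # the second-highest bid sets the price
    return winner, price

def my_payoff(valuation, bid, other_bids):
    """Payoff for one agent: value minus price if it wins, zero otherwise."""
    winner, price = vickrey_auction({"me": bid, **other_bids})
    return valuation - price if winner == "me" else 0.0

# Hypothetical agents bidding for a single resource (say, one slot in a travel itinerary).
others = {"agent_a": 62.0, "agent_b": 75.0, "agent_c": 58.0}
true_value = 80.0

for bid in (60.0, 80.0, 100.0):  # underbid, truthful bid, overbid
    print(f"bid={bid:6.1f} -> payoff {my_payoff(true_value, bid, others):.1f}")
# Truthful bidding (80.0) never does worse than shading the bid up or down, which is the
# incentive-compatibility property the VCG family generalizes to multi-step settings.
```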
Principles, agents, and the chain of accountability in AI systems (46:26)
Dr. Michael Zargham provides a systems engineering perspective on AI agents, emphasizing accountability structures and the relationship between principals who deploy agents and the agents themselves. In this episode, he brings clarity to the often misunderstood concept of agents in AI by grounding them in established engineering principles rather than treating them as mysterious or elusive entities.

Show highlights
- Agents should be understood through the lens of the principal-agent relationship, with clear lines of accountability
- True validation of AI systems means ensuring outcomes match intentions, not just optimizing loss functions
- LLMs by themselves are "high-dimensional word calculators," not agents - agents are more complex systems with LLMs as components
- Guardrails provide deterministic constraints ("musts" or "shalls") versus constitutional AI's softer guidance ("shoulds")
- Systems engineering approaches from civil engineering and materials science offer valuable frameworks for AI development
- Authority and accountability must align - people shouldn't be held responsible for systems they don't have authority to control
- The transition from static input-output to closed-loop dynamical systems represents the shift toward truly agentic behavior
- Robust agent systems require both exploration (lab work) and exploitation (hardened deployment) phases with different standards

Explore Dr. Zargham's work
- Protocols and Institutions (Feb 27, 2025)
- Comments Submitted by BlockScience, University of Washington APL Information Risk and Synthetic Intelligence Research Initiative (IRSIRI), Cognitive Security and Education Forum (COGSEC), and the Active Inference Institute (AII) to the Networking and Information Technology Research and Development National Coordination Office's Request for Comment on The Creation of a National Digital Twins R&D Strategic Plan, NITRD-2024-13379 (Aug 8, 2024)
Supervised machine learning for science with Christoph Molnar and Timo Freiesleben, Part 2 (41:58)
Part 2 of this series could have easily been renamed "AI for science: The expert’s guide to practical machine learning.” We continue our discussion with Christoph Molnar and Timo Freiesleben to look at how scientists can apply supervised machine learning techniques from the previous episode into their research.

- Introduction to supervised ML for science (0:00) - Welcome back to Christoph Molnar and Timo Freiesleben, co-authors of “Supervised Machine Learning for Science: How to Stop Worrying and Love Your Black Box”
- The model as the expert? (1:00) - Evaluation metrics have profound downstream effects on all modeling decisions. Data augmentation offers a simple yet powerful way to incorporate domain knowledge. Domain expertise is often undervalued in data science despite being crucial.
- Measuring causality: Metrics and blind spots (10:10) - Causality approaches in ML range from exploring associations to inferring treatment effects.
- Connecting models to scientific understanding (18:00) - Interpretation methods must stay within realistic data distributions to yield meaningful insights.
- Robustness across distribution shifts (26:40) - Robustness requires understanding what distribution shifts affect your model. Pre-trained models and transfer learning provide promising paths to more robust scientific ML.
- Reproducibility challenges in ML and science (35:00) - Reproducibility challenges differ between traditional science and machine learning.

Go back to listen to part one of this series for the conceptual foundations that support these practical applications. Check out Christoph and Timo's book “Supervised Machine Learning for Science: How to Stop Worrying and Love Your Black Box,” available online now.
Supervised machine learning for science with Christoph Molnar and Timo Freiesleben, Part 1 (27:29)
Machine learning is transforming scientific research across disciplines, but many scientists remain skeptical about using approaches that focus on prediction over causal understanding. That’s why we are excited to have Christoph Molnar return to the podcast with Timo Freiesleben. They are co-authors of "Supervised Machine Learning for Science: How to Stop Worrying and Love Your Black Box." We will talk about the perceived problems with automation in certain sciences and find out how scientists can use machine learning without losing scientific accuracy.

- Different scientific disciplines have varying goals beyond prediction, including control, explanation, and reasoning about phenomena
- Traditional scientific approaches build models from simple to complex, while machine learning often starts with complex models
- Scientists worry about using ML due to lack of interpretability and causal understanding
- ML can both integrate domain knowledge and test existing scientific hypotheses
- "Shortcut learning" occurs when models find predictive patterns that aren't meaningful
- Machine learning adoption varies widely across scientific fields
- Ecology and medical imaging have embraced ML, while other fields remain cautious
- Future directions include ML potentially discovering scientific laws humans can understand
- Researchers should view machine learning as another tool in their scientific toolkit

Stay tuned! In part 2, we'll shift the discussion with Christoph and Timo to talk about putting these concepts into practice.
The future of AI: Exploring modeling paradigms (33:42)
Unlock the secrets to AI's modeling paradigms. We emphasize the importance of modeling practices, how they interact, and how they should be considered in relation to each other before you act. Using the right tool for the right job is key. We hope you enjoy these examples of where the greatest AI and machine learning techniques exist in your routine today.

- More AI agent disruptors (0:56) - Proxy from London start-up Convergence AI. Another hit to OpenAI, this product is available for free, unlike OpenAI’s Operator.
- AI Paris Summit - What's next for regulation? (4:40) - [Vice President] Vance tells Europeans that heavy regulation can kill AI. The US federal administration is withdrawing from the previous trend of sweeping big tech regulation on modeling systems. The EU is pushing to reduce bureaucracy but not regulatory pressure.
- Modeling paradigms explained (10:33) - As companies look for an edge in high-stakes computations, we’ve seen best-in-class teams rediscovering expert system-based techniques that, with modern computing power, are breathing new life into them.
- Paradigm 1: Agents (11:23)
- Paradigm 2: Generative (14:26)
- Paradigm 3: Mathematical optimization (regression) (18:33)
- Paradigm 4: Predictive (classification) (23:19)
- Paradigm 5: Control theory (24:37)
- The right modeling paradigm for the job? (28:05)
The AI Fundamentalists

Agentic AI is the latest foray into big-bet promises for businesses and society at large. While promising autonomy and efficiency, AI agents raise fundamental questions about their accuracy, governance, and the potential pitfalls of over-reliance on automation. Does this story sound vaguely familiar? Hold that thought. This discussion about the over-under of certain promises is for you.

Show Notes
- The economics of LLMs and DeepSeek R1 (00:00:03) - Reviewing recent developments in AI technologies and their implications. Discussing the impact of DeepSeek’s R1 model on the AI landscape and NVIDIA.
- The origins of agentic AI (00:07:12) - Status quo of AI models to date: Is big tech backing away from the promise of generative AI? Agentic AI is designed to perceive, reason, act, and learn.
- Governance and agentic AI (00:13:12) - Examining the tension between cost efficiency and performance risks [LangChain State of AI Agents Report]. Highlighting governance concerns related to AI agents.
- Issues with agentic AI implementation (00:21:01) - Considering the limitations of AI agents and their adoption in the workplace. Analyzing real-world experiments with AI agent technologies, like Devin.
- What's next for complex and agentic AI systems (00:29:27) - Offering insights on the cautious integration of these systems in business practices. Encouraging a thoughtful approach to leveraging AI capabilities for measurable outcomes.
Contextual integrity and differential privacy: Theory vs. application with Sebastian Benthall (32:32)
What if privacy could be as dynamic and socially aware as the communities it aims to protect? Sebastian Benthall, a senior research fellow from NYU’s Information Law Institute, shows us how privacy is complex. He uses Helen Nissenbaum’s work with contextual integrity and concepts in differential privacy to explain the complexity of privacy. Our talk explains how privacy is not just about protecting data but also about following social rules in different situations, from healthcare to education. These rules can change privacy regulations in big ways.

Show notes
- Intro: Sebastian Benthall (0:03) - Research: Designing Fiduciary Artificial Intelligence (Benthall, Shekman); Integrating Differential Privacy and Contextual Integrity (Benthall, Cummings)
- Exploring differential privacy and contextual integrity (1:05) - Discussion about the origins of each subject. How are differential privacy and contextual integrity used to enforce each other?
- Accepted context or legitimate context? (9:33) - Does context develop from what society accepts over time? Approaches to determine situational context and legitimacy.
- Next steps in contextual integrity (13:35) - Is privacy as we know it ending? Areas where integrated differential privacy and contextual integrity can help (Cummings).
- Interpretations of differential privacy (14:30) - Not a silver bullet. New questions posed from NIST about its application.
- Privacy determined by social norms (20:25) - Game theory and its potential for understanding social norms.
- Agents and governance: what will ultimately decide privacy? (25:27) - Voluntary disclosures and the biases they can present towards groups that are least concerned with privacy. Avoiding self-fulfilling prophecy from data and context.
Model documentation: Beyond model cards and system cards in AI governance (27:43)
What if the secret to successful AI governance lies in understanding the evolution of model documentation? In this episode, our hosts challenge the common belief that model cards marked the start of documentation in AI. We explore model documentation practices, from their crucial beginnings in fields like finance to their adaptation in Silicon Valley. Our discussion also highlights the important role of early modelers and statisticians in advocating for a complete approach that includes the entire model development lifecycle.

Show Notes
- Model documentation origins and best practices (1:03) - Documenting a model is a comprehensive process that requires giving users and auditors a clear understanding: Why was the model built? What data goes into a model? How is the model implemented? What does the model output?
- Model cards - pros and cons (7:33) - "Model cards for model reporting," Association for Computing Machinery. Evolution from this research to Google's definition to today. How the market perceives them vs. what they are. Why the analogy “nutrition labels for models” needs a closer look.
- System cards - pros and cons (12:03) - To their credit, OpenAI system cards somewhat bridge the gap between proper model documentation and a model card. They contain complex descriptions of evaluation methodologies along with results; extra points for reporting red-teaming results. They represent third-party opinions of the social and ethical implications of the release of the model.
- Automating model documentation with generative AI (17:17) - Finding the balance of automation in a great governance strategy. Generative AI can provide an assist in editing and personal workflow.
- Improving documentation for AI governance (23:11) - As the model expert, engage from the beginning with writing the bulk of model documentation by hand. The exercise of documenting your models solidifies your understanding of the model's goals, values, and methods for the business.
New paths in AI: Rethinking LLMs and model risk strategies (39:51)
Are businesses ready for large language models as a path to AI? In this episode, the hosts reflect on the past year of what has changed and what hasn’t changed in the world of LLMs. Join us as we debunk the latest myths and emphasize the importance of robust risk management in AI integration. The good news is that many decisions about adoption have forced businesses to discuss their future and impact in the face of emerging technology. You won't want to miss this discussion.

- Intro and news: The veto of California's AI Safety Bill (00:00:03) - Can state-specific AI regulations really protect consumers, or do they risk stifling innovation? (Gov. Newsom's response) The veto highlights the critical need for risk-based regulations that don't rely solely on the size and cost of language models. Arguments to be made for a cohesive national framework that ensures consistent AI regulation across the United States.
- Are businesses ready to embrace large language models, or are they underestimating the challenges? (00:08:35) - The myth that acquiring a foundational model is a quick fix for productivity woes. The essential role of robust risk management strategies, especially in sensitive sectors handling personal data. Review of model cards, OpenAI's system cards, and the importance of thorough testing, validation, and stricter regulations to prevent a false sense of security. Transparency alone is not enough; objective assessments are crucial for genuine progress in AI integration.
- From hallucinations in language models to ethical energy use, we tackle some of the most pressing problems in AI today (00:16:29) - Reinforcement learning with annotators and the controversial use of other models for review. Yann LeCun's energy-based systems and retrieval-augmented generation (RAG) offer intriguing alternatives that could reshape modeling approaches.
- The ethics of advancing AI technologies: consider the parallels with past monumental achievements and the responsible allocation of resources (00:26:49) - There is good news about developments and lessons learned from LLMs, but there is also a long way to go. Our original prediction for LLMs in episode 2 still rings true: “Reasonable expectations of LLMs: Where truth matters and risk tolerance is low, LLMs will not be a good fit.” With increased hype and awareness from LLMs came varying levels of interest in how all model types and their impacts are governed in a business.
Complex systems: What data science can learn from astrophysics with Rachel Losacco (41:02)
Our special guest, astrophysicist Rachel Losacco, explains the intricacies of galaxies, modeling, and the computational methods that unveil their mysteries. She shares stories about how advanced computational resources enable scientists to decode galaxy interactions over millions of years with true-to-life accuracy. Sid and Andrew discuss transferable practices for building resilient modeling systems.

- Prologue: Why it's important to bring stats back [00:00:03] - Announcement from the American Statistical Association (ASA): Data Science Statement Updated to Include “and AI”
- Today's guest: Rachel Losacco [00:02:10] - Rachel is an astrophysicist who’s worked with major galaxy formation simulations for many years. She hails from Leiden University and the University of Florida. As a Senior Data Scientist, she works on modeling road safety.
- Defining complex systems through astrophysics [00:02:52] - Discussion about the origins and adoption of complex systems. Difficulties with complex systems: nonlinearity, chaos and randomness, collective dynamics and hierarchy, and emergence.
- Complexities of nonlinear systems [00:08:20] - Linear models (least squares, GLMs, SVMs) can be incredibly powerful, but they cannot model all possible functions (e.g. a decision boundary of concentric circles). Nonlinearity and how it exists in the natural world.
- Chaos and randomness [00:11:30] - Enter references to Jurassic Park and the butterfly effect. “In universe simulations, a change to a single parameter can govern if entire galaxy clusters will ever form” - Rachel
- Collective dynamics and hierarchy [00:15:45] - Interactions between agents don’t occur globally and are often mediated through effects that only happen on specific sub-scales. Adaptation: components of systems breaking out of linear relationships between inputs and outputs to better serve the function of the greater system.
- Emergence and complexity [00:23:36] - New properties arise from the system that cannot be explained by the base rules governing the system.
- Examples in astrophysics [00:24:34] - These difficulties are parts of solving previously impossible problems. Consider this lecture from IIT Delhi on Complex Systems to get a sense of what is required to study and formalize a complex system and its collective dynamics (https://www.youtube.com/watch?v=yJ39ppgJlf0).
- Consciousness and reasoning from a new point of view [00:31:45] - Nonlinearity, hierarchy, feedback loops, and emergence may be ways to study consciousness. The brain is a complex system that a simple set of rules cannot fully define. See: brain modeling from scratch of C. elegans
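To make the concentric-circles point above concrete, here is a small, hedged sketch (not from the episode): a plain logistic regression cannot separate two nested rings, but the same linear model works once a simple nonlinear feature is added. The dataset parameters and the use of scikit-learn are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import make_circles
from sklearn.linear_model import LogisticRegression

# Two concentric rings: no straight line can separate the classes.
X, y = make_circles(n_samples=500, factor=0.4, noise=0.05, random_state=0)

linear = LogisticRegression().fit(X, y)
print("linear features only:", linear.score(X, y))      # around 0.5, chance level

# Add one nonlinear feature (squared distance from the origin) and refit the same model.
X_aug = np.column_stack([X, (X ** 2).sum(axis=1)])
augmented = LogisticRegression().fit(X_aug, y)
print("with r^2 feature:    ", augmented.score(X_aug, y))  # close to 1.0
```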
Preparing AI for the unexpected: Lessons from recent IT incidents (34:13)
Can your AI models survive a big disaster? While a recent major IT incident with CrowdStrike wasn't AI-related, the magnitude and reaction reminded us that no system, no matter how proven, is immune to failure. AI modeling systems are no different. Neglecting the best practices of building models can lead to unrecoverable failures. Discover how the three-tiered framework of robustness, resiliency, and anti-fragility can guide your approach to creating AI infrastructures that not only perform reliably under stress but also fail gracefully when the unexpected happens.

Show Notes
- Technology, incidents, and why basics matter (00:00:03) - While the recent CrowdStrike incident wasn't caused by AI, its impact was a wake-up call for the people and processes that support critical systems. As AI is increasingly being used at both experimental and production levels, we can expect AI incidents are a matter of if, not when. What can you do to prepare?
- The "7 Ps": Are you capable of handling the unexpected? (00:09:05) - The 7 Ps is an adage, dating back to WWII, that aligns with our "do things the hard way" approach to AI governance and modeling systems. Let’s consider the levels of building a performant system: robustness, resiliency, and antifragility.
- Model robustness (00:10:03) - Robustness is a very important but often overlooked component of building modeling systems. We suspect that part of the problem is due to: the Kaggle-driven upbringing of data scientists; assumed generalizability of modeling systems, when models are optimized to perform well on their training data but do not generalize enough to perform well on unseen data.
- Model resilience (00:16:10) - Resiliency is the ability to absorb adverse stimuli without destruction and return to its pre-event state. In practice, robustness and resiliency testing and planning are often easy components to leave out. This is where risks and threats are exposed. See also Episode 8, Model validation: Robustness and resilience.
- Models and antifragility (00:25:04) - Unlike resiliency, which is the ability to absorb damaging inputs without breaking, antifragility is the ability of a system to improve from challenging stimuli (i.e. the human body). A key question we need to ask ourselves: if we are not actively building our AI systems to be antifragile, why are we using AI systems at all?
Exploring the NIST AI Risk Management Framework (RMF) with Patrick Hall (41:24)
Join us as we chat with Patrick Hall, Principal Scientist at Hallresearch.ai and Assistant Professor at George Washington University. He shares his insights on the current state of AI, its limitations, and the potential risks associated with it. The conversation also touches on the importance of responsible AI, the role of the National Institute of Standards and Technology (NIST) AI Risk Management Framework (RMF) in adoption, and the implications of using generative AI in decision-making.

Show notes
- Governance, model explainability, and high-risk applications (00:00:03) - Intro to Patrick and his latest book, Machine Learning for High-Risk Applications: Approaches to Responsible AI (2023).
- The benefits of the NIST AI Risk Management Framework (00:04:01) - NIST does not have a profit motive, which avoids the potential for conflicts of interest when providing guidance on responsible AI. It solicits, adjudicates, and incorporates feedback from the public and other stakeholders. NIST is not law; however, its recommendations set companies up for outcome-based reviews by regulators.
- Accountability challenges in "blame-free" cultures (00:10:24) - Patrick cites these cultures as having the hardest time with the framework's recommendations. Practices like documentation and fair model reviews need accountability and objectivity. If everyone's responsible, no one's responsible.
- The value of explainable models vs. black-box models (00:15:00) - Concerns about replacing explainable models with LLMs for LLMs' sake. Why generative AI is bad for decision-making.
- AI and its impact on students (00:21:49) - Students are more indicative of where the hype and market are today. Teaching them how to work through the best model for the best job despite the hype.
- AI incidents and contextual failures (00:26:17) - AI Incident Database. AI, as it currently stands, is a memorizing and calculating technology. It lacks the ability to incorporate real-world context. McDonald's AI drive-thru debacle is a warning to us all.
- Generative AI and homogenization problems (00:34:30) - Recommended resources from Patrick: Ed Zitron's "Better Offline"; NIST ARIA; "AI Safety Is a Narrative Problem."
Data lineage and AI: Ensuring quality and compliance with Matt Barlin (28:29)
Ready to uncover the secrets of modern systems engineering and the future of AI? Join us for an enlightening conversation with Matt Barlin, the Chief Science Officer of Valence. Matt's extensive background in systems engineering and data lineage sets the stage for a fascinating discussion. He sheds light on the historical evolution of the field, the critical role of documentation, and the early detection of defects in complex systems. This episode promises to expand your understanding of model-based systems and data issues, offering valuable insights that only an expert of Matt's caliber can provide.

In the heart of our episode, we dive into the fundamentals and transformative benefits of data lineage in AI. Matt draws intriguing parallels between data lineage and the engineering life cycle, stressing the importance of tracking data origins, access rights, and verification processes. Discover how decentralized identifiers are paving the way for individuals to control and monetize their own data. With the phasing out of third-party cookies and the challenges of human-generated training data shortages, we explore how systems like retrieval-augmented generation (RAG) and compliance regulations like the EU AI Act are shaping the landscape of AI data quality and compliance. Don’t miss this thought-provoking episode that promises to keep you at the forefront of responsible AI.
Differential privacy: Balancing data privacy and utility in AI (28:17)
Explore the basics of differential privacy and its critical role in protecting individual anonymity. The hosts explain the latest guidelines and best practices in applying differential privacy to data for models such as AI. Learn how this method also makes sure that personal data remains confidential, even when datasets are analyzed or hacked.

Show Notes
- Intro and AI news (00:00) - Google AI search tells users to glue pizza and eat rocks. Gary Marcus on break? (Maybe, and an X-only break.)
- What is differential privacy? (06:34) - Differential privacy is a process for sensitive data anonymization that offers each individual in a dataset the same privacy they would experience if they were removed from the dataset entirely. NIST’s recent paper SP 800-226 IPD: “Any privacy harms that result from a differentially private analysis could have happened if you had not contributed your data.” There are two main types of differential privacy: global (NIST calls it central) and local.
- Why should people care about differential privacy? (11:30) - Interest has been increasing for organizations to intentionally and systematically prioritize the privacy and safety of user data. Speed up deployments of AI systems for enterprise customers since connections to raw data do not need to be established. Increase data security for customers that utilize sensitive data in their modeling systems. Minimize the risk of sensitive data exposure for your data privileges - i.e. don’t be THAT organization. Guidelines and resources for applied differential privacy: Guidelines for Evaluating Differential Privacy Guarantees (NIST); De-Identification.
- Practical examples of applied differential privacy (15:58) - Continuous features: Dwork, McSherry, Nissim, and Smith’s seminal 2006 paper, "Calibrating Noise to Sensitivity in Private Data Analysis," introduces a concept called ε-differential privacy. Categorical features: Warner (1965) created a randomized response technique in his paper “Randomized Response: A Survey Technique for Eliminating Evasive Answer Bias.”
- Summary and key takeaways (23:59) - Differential privacy is going to be a part of how many of us need to manage data privacy, when data providers can’t provide us with anonymized data for analysis or when anonymization isn’t enough for our privacy needs. Hopeful that cohort targeting takes over for individual targeting. Remember: differential privacy does not prevent bias!
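As a hedged, minimal sketch of the two techniques the notes cite (not code discussed on the show), the snippet below implements a Laplace-noise counting query in the spirit of Dwork, McSherry, Nissim, and Smith (2006) and a Warner-style randomized response for a sensitive yes/no attribute (the forced-response variant). The epsilon values, truth probability, and the toy dataset are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def laplace_count(data, epsilon):
    """Epsilon-DP count: a counting query has sensitivity 1, so add Laplace(1/epsilon) noise."""
    true_count = sum(data)
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

def randomized_response(truth, p_truth=0.75):
    """Answer honestly with probability p_truth, otherwise flip a fair coin.
    Every respondent keeps plausible deniability about their true answer."""
    if rng.random() < p_truth:
        return truth
    return rng.random() < 0.5

# Hypothetical dataset: 1 means the individual has the sensitive attribute.
data = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]

print("noisy count, eps=0.5:", laplace_count(data, epsilon=0.5))
print("noisy count, eps=5.0:", laplace_count(data, epsilon=5.0))  # less noise, weaker privacy

responses = [randomized_response(bool(x)) for x in data]
# Unbias the aggregate: E[yes] = p_truth * true_rate + (1 - p_truth) * 0.5
p = 0.75
estimated_rate = (np.mean(responses) - (1 - p) * 0.5) / p
print("estimated prevalence:", estimated_rate)
```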
Responsible AI: Does it help or hurt innovation? With Anthony Habayeb (45:59)
Artificial Intelligence (AI) stands at a unique intersection of technology, ethics, and regulation. The complexities of responsible AI are brought into sharp focus in this episode featuring Anthony Habayeb, CEO and co-founder of Monitaur. As responsible AI is scrutinized for its role in profitability and innovation, Anthony and our hosts discuss the imperatives of safe and unbiased modeling systems, the role of regulations, and the importance of ethics in shaping AI.

Show notes
- Prologue: Why responsible AI? Why now? (00:00:00) - Deviating from our normal topics about modeling best practices. Context about where regulation plays a role in industries besides big tech. Can we learn from other industries about the role of "responsibility" in products?
- Special guest, Anthony Habayeb (00:02:59) - Introductions and start of the discussion. Of all the companies you could build around AI, why governance?
- Is responsible AI the right phrase? (00:11:20) - Should we even call good modeling and business practices "responsible AI"? Is having responsible AI a “want to have” or a “need to have”?
- Importance of AI regulation and responsibility (00:14:49) - People in the AI and regulation worlds have started pushing back on responsible AI. Do regulations impede freedom? Discussing the big picture of responsibility and governance: explainability, repeatability, records, and audit.
- What about bias and fairness? (00:22:40) - You can have fair models that operate with bias. Bias in practice identifies inequities that models have learned. Fairness is correcting for societal biases to level the playing field for safer business and modeling practices to prevail.
- Responsible deployment and business management (00:35:10) - Discussion about what organizations get right about responsible AI, and what organizations can get completely wrong if they aren't careful.
- Embracing responsible AI practices (00:41:15) - Getting your teams, companies, and individuals involved in the movement towards building AI responsibly.