Welcome! We engage in fascinating discussions with pre-eminent figures in the AI field. Our flagship show covers current affairs in AI, cognitive science, neuroscience and philosophy of mind with in-depth analysis. Our approach is unrivalled in terms of scope and rigour – we believe in intellectual diversity in AI, and we touch on all of the main ideas in the field with the hype surgically removed. MLST is run by Tim Scarfe, Ph.D (https://www.linkedin.com/in/ecsquizor/) and features regular ...

Karl Friston - Why Intelligence Can't Get Too Large (Goldilocks principle)
1:21:39
In this episode, hosts Tim and Keith finally realize their long-held dream of sitting down with their hero, the brilliant neuroscientist Professor Karl Friston. The conversation is a fascinating and mind-bending journey into Professor Friston's life's work, the Free Energy Principle, and what it reveals about life, intelligence, and consciousness i…

The Day AI Solves My Puzzles Is The Day I Worry (Prof. Cristopher Moore)
1:34:52
We are joined by Cristopher Moore, a professor at the Santa Fe Institute with a diverse background in physics, computer science, and machine learning. The conversation begins with Cristopher, who calls himself a "frog", explaining that he prefers to dive deep into specific, concrete problems rather than taking a high-level "bird's-eye view". They ex…

Michael Timothy Bennett: Defining Intelligence and AGI Approaches
1:05:44
Dr. Michael Timothy Bennett is a computer scientist who's deeply interested in understanding artificial intelligence, consciousness, and what it means to be alive. He's known for his provocative paper "What the F*** is Artificial Intelligence" which challenges conventional thinking about AI and intelligence. *** SPONSOR MESSAGES *** Prolific: Quality da…

Superintelligence Strategy (Dan Hendrycks)
1:45:38
Deep dive with Dan Hendrycks, a leading AI safety researcher and co-author of the "Superintelligence Strategy" paper with former Google CEO Eric Schmidt and Scale AI CEO Alexandr Wang. *** SPONSOR MESSAGES Gemini CLI is an open-source AI agent that brings the power of Gemini directly into your terminal - https://github.com/google-gemini/gemini-cli …

DeepMind Genie 3 [World Exclusive] (Jack Parker Holder, Shlomi Fruchter)
58:22
This episode features Shlomi Fruchter and Jack Parker Holder from Google DeepMind, who are unveiling a new AI called Genie 3. The host, Tim Scarfe, describes it as the most mind-blowing technology he has ever seen. We were invited to their offices to conduct the interview (not sponsored). Imagine you could create a video game world just by describing…

Large Language Models and Emergence: A Complex Systems Perspective (Prof. David C. Krakauer)
49:48
Prof. David Krakauer, President of the Santa Fe Institute, argues that we are fundamentally confusing knowledge with intelligence, especially when it comes to AI. He defines true intelligence as the ability to do more with less—to solve novel problems with limited information. This is contrasted with current AI models, which he describes as doing le…

Pushing compute to the limits of physics
1:23:32
Dr. Maxwell Ramstead grills Guillaume Verdon (AKA “Beff Jezos”), founder of the thermodynamic computing startup Extropic. Guillaume shares his unique path – from dreaming about space travel as a kid to becoming a physicist, then working on quantum computing at Google, to developing a radically new form of computing hardware for machine learnin…

The Fractured Entangled Representation Hypothesis (Kenneth Stanley, Akarsh Kumar)
2:16:22
Are the AI models you use today imposters? Please watch the intro video we did before this: https://www.youtube.com/watch?v=o1q6Hhz0MAg In this episode, hosts Dr. Tim Scarfe and Dr. Duggar are joined by AI researcher Prof. Kenneth Stanley and MIT PhD student Akarsh Kumar to discuss their fascinating paper, "Questioning Representational Optimism in D…

The Fractured Entangled Representation Hypothesis (Intro)
15:45
What if today's incredible AI is just a brilliant "impostor"? This episode features host Dr. Tim Scarfe in conversation with guests Prof. Kenneth Stanley (ex-OpenAI), Dr. Keith Duggar (MIT), and Akarsh Kumar (MIT). While AI today produces amazing results on the surface, its internal understanding is a complete mess, described as "total spaghetti" [0…

Three Red Lines We're About to Cross Toward AGI (Daniel Kokotajlo, Gary Marcus, Dan Hendrycks)
2:07:07
What if the most powerful technology in human history is being built by people who openly admit they don't trust each other? In this explosive 2-hour debate, three AI experts pull back the curtain on the shocking psychology driving the race to Artificial General Intelligence—and why the people building it might be the biggest threat of all. Kokotaj…

How AI Learned to Talk and What It Means - Prof. Christopher Summerfield
1:08:28
We interview Professor Christopher Summerfield from Oxford University about his new book "These Strange New Minds: How AI Learned to Talk and What It Means". AI learned to understand the world just by reading text - something scientists thought was impossible. You don't need to see a cat to know what one is; you can learn everything from words alone. Thi…

"Blurring Reality" - Chai's Social AI Platform (SPONSORED)
50:59
50:59"Blurring Reality" - Chai's Social AI Platform - sponsored This episode of MLST explores the groundbreaking work of Chai, a social AI platform that quietly built one of the world's largest AI companion ecosystems before ChatGPT's mainstream adoption. With over 10 million active users and just 13 engineers serving 2 trillion tokens per day, Chai dis…
…
continue reading

1
Google AlphaEvolve - Discovering new science (exclusive interview)
1:13:58
1:13:58
נגן מאוחר יותר
נגן מאוחר יותר
רשימות
לייק
אהבתי
Today Google DeepMind released AlphaEvolve: a Gemini coding agent for algorithm discovery. It beat the famous Strassen algorithm for matrix multiplication, a record set 56 years ago. Google has been killing it recently. We had early access to the paper and interviewed the researchers behind the work. AlphaEvolve: A Gemini-powered coding agent for designing ad…

Prof. Randall Balestriero - LLMs without pretraining and SSL
34:30
Randall Balestriero joins the show to discuss some counterintuitive findings in AI. He shares research showing that huge language models, even when started from scratch (randomly initialized) without massive pre-training, can learn specific tasks like sentiment analysis surprisingly well, train stably, and avoid severe overfitting, sometimes matchi…

How Machines Learn to Ignore the Noise (Kevin Ellis + Zenna Tavares)
1:16:55
Prof. Kevin Ellis and Dr. Zenna Tavares talk about making AI smarter, like humans. They want AI to learn from just a little bit of information by actively trying things out, not just by looking at tons of data. They discuss two main ways AI can "think": one way is like following specific rules or steps (like a computer program), and the other is mo…

Eiso Kant (CTO poolside) - Superhuman Coding Is Coming!
1:36:28
Eiso Kant, CTO of poolside AI, discusses the company's approach to building frontier AI foundation models, particularly focused on software development. Their unique strategy is reinforcement learning from code execution feedback, which is an important axis for scaling AI capabilities beyond just increasing model size or data volume. Kant predicts h…

The Compendium - Connor Leahy and Gabriel Alfour
1:37:10
Connor Leahy and Gabriel Alfour, AI researchers from Conjecture and authors of "The Compendium," join us for a critical discussion centered on Artificial Superintelligence (ASI) safety and governance. Drawing from their comprehensive analysis in "The Compendium," they articulate a stark warning about the existential risks inherent in uncontrolled AI…

ARC Prize v2 Launch! (Francois Chollet and Mike Knoop)
54:15
We are joined by Francois Chollet and Mike Knoop, to launch the new version of the ARC prize! In version 2, the challenges have been calibrated with humans such that at least 2 humans could solve each task in a reasonable time, but also adversarially selected so that frontier reasoning models can't solve them. The best LLMs today get negligible per…

Test-Time Adaptation: the key to reasoning with DL (Mohamed Osman)
1:03:36
Mohamed Osman joins to discuss MindsAI's highest scoring entry to the ARC challenge 2024 and the paradigm of test-time fine-tuning. They explore how the team, now part of Tufa Labs in Zurich, achieved state-of-the-art results using a combination of pre-training techniques, a unique meta-learning strategy, and an ensemble voting mechanism. Mohamed e…
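A minimal sketch of the general test-time adaptation idea discussed in the episode, under heavy simplification and assuming nothing about MindsAI's actual system: for each task, copies of a tiny model are briefly trained on that task's demonstration pairs alone, and their predictions are combined by majority vote. The toy linear model, the "reverse the vector" task, and all names below are illustrative assumptions, not the team's code.
```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(0)

def fine_tune(w, X, Y, lr=0.05, steps=300):
    """A short burst of gradient descent on *this task's* demonstration pairs only."""
    w = w.copy()
    for _ in range(steps):
        w -= lr * X.T @ (X @ w - Y)          # full-batch least-squares gradient
    return w

def solve_task(X_demo, Y_demo, x_test, n_members=5):
    """Test-time adaptation with ensemble voting: each member starts from a
    different random init, is adapted on the demos, and casts one vote."""
    votes = []
    for _ in range(n_members):
        w0 = 0.1 * rng.normal(size=(X_demo.shape[1], Y_demo.shape[1]))
        w = fine_tune(w0, X_demo, Y_demo)
        votes.append(tuple(int(v) for v in np.round(x_test @ w)))
    return Counter(votes).most_common(1)[0][0]

# Toy task: the hidden rule is "reverse the vector"; three demo pairs, one test input.
X_demo = np.array([[1., 2., 3.], [0., 1., 4.], [2., 0., 1.]])
Y_demo = X_demo[:, ::-1].copy()
print(solve_task(X_demo, Y_demo, np.array([5., 6., 7.])))   # -> (7, 6, 5)
```
The real system works on ARC grids with large models and geometric augmentations, but the shape is the same: adapt on the task's own examples at inference time, then aggregate candidate outputs by voting.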

GSMSymbolic paper - Iman Mirzadeh (Apple)
1:11:23
Iman Mirzadeh from Apple, who recently published the GSM-Symbolic paper, discusses the crucial distinction between intelligence and achievement in AI systems. He critiques current AI research methodologies, highlighting the limitations of Large Language Models (LLMs) in reasoning and knowledge representation. SPONSOR MESSAGES: *** Tufa AI Labs is a …

Reasoning, Robustness, and Human Feedback in AI - Max Bartolo (Cohere)
1:23:11
Dr. Max Bartolo from Cohere discusses machine learning model development, evaluation, and robustness. Key topics include model reasoning, the DynaBench platform for dynamic benchmarking, data-centric AI development, model training challenges, and the limitations of human feedback mechanisms. The conversation also covers technical aspects like influ…

Tau Language: The Software Synthesis Future (sponsored)
1:41:19
This sponsored episode features mathematician Ohad Asor discussing logical approaches to AI, focusing on the limitations of machine learning and introducing the Tau language for software development and blockchain tech. Asor argues that machine learning cannot guarantee correctness. Tau allows logical specification of software requirements, automat…

John Palazza - Vice President of Global Sales @ CentML (sponsored)
54:50
John Palazza from CentML joins us in this sponsored interview to discuss the critical importance of infrastructure optimization in the age of Large Language Models and Generative AI. We explore how enterprises can transition from the innovation phase to production and scale, highlighting the significance of efficient GPU utilization and cost manage…

Transformers Need Glasses! - Federico Barbero
1:00:54
Federico Barbero (DeepMind/Oxford) is the lead author of "Transformers Need Glasses!". Have you ever wondered why LLMs struggle with seemingly simple tasks like counting or copying long strings of text? We break down the theoretical reasons behind these failures, revealing architectural bottlenecks and the challenges of maintaining information fide…

Sakana AI - Chris Lu, Robert Tjarko Lange, Cong Lu
1:37:54
We speak with Sakana AI, who are building nature-inspired methods that could fundamentally transform how we develop AI systems. The guests include Chris Lu, a researcher who recently completed his DPhil at Oxford University under Prof. Jakob Foerster's supervision, where he focused on meta-learning and multi-agent systems. Chris is the first author…

Clement Bonnet - Can Latent Program Networks Solve Abstract Reasoning?
51:26
Clement Bonnet discusses his novel approach to the ARC (Abstraction and Reasoning Corpus) challenge. Unlike approaches that rely on fine-tuning LLMs or generating samples at inference time, Clement's method encodes input-output pairs into a latent space, optimizes this representation with a search algorithm, and decodes outputs for new inputs. This…
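To make the three-step recipe concrete (represent the task as a latent, search that latent against the demonstration pairs, decode for the test input), here is a toy sketch. The 2-D affine "program" latent, the gradient search, and the y = 2x + 1 task are invented stand-ins for illustration, not Clement's actual latent program network.
```python
import numpy as np

def decode(z, x):
    """Toy 'decoder': reads the 2-D latent as an elementwise affine program a*x + b."""
    a, b = z
    return a * x + b

def search_latent(demos, lr=0.05, steps=500):
    """Inference-time search over the latent: plain gradient descent on how well
    the decoded program reconstructs the task's demonstration pairs."""
    z = np.zeros(2)
    for _ in range(steps):
        a, b = z
        grad_a = sum(2 * np.mean((a * x + b - y) * x) for x, y in demos)
        grad_b = sum(2 * np.mean(a * x + b - y) for x, y in demos)
        z = z - lr * np.array([grad_a, grad_b])
    return z

# Demonstration pairs all follow the hidden rule y = 2x + 1; the test input is unseen.
demos = [(np.array([1., 2.]), np.array([3., 5.])),
         (np.array([0., 4.]), np.array([1., 9.]))]
z = search_latent(demos)                              # optimise the latent for this task
print(np.round(decode(z, np.array([10., 20.])), 2))   # -> [21. 41.]
```
The point of the design is that the heavy lifting at inference time is a search in a compact latent space rather than sampling or fine-tuning a large generative model.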

Prof. Jakob Foerster - ImageNet Moment for Reinforcement Learning?
53:31
Prof. Jakob Foerster, a leading AI researcher at Oxford University and Meta, and Chris Lu, a researcher at OpenAI, explain how AI is moving beyond just mimicking human behaviour to creating truly intelligent agents that can learn and solve problems on their own. Foerster champions open-source AI for responsible, decentralised development. He…

Daniel Franzen & Jan Disselhoff - ARC Prize 2024 winners
1:09:04
Daniel Franzen and Jan Disselhoff, the "ARChitects", are the official winners of the ARC Prize 2024. Filmed at Tufa Labs in Zurich, they revealed how they achieved a remarkable 53.5% accuracy by creatively utilising large language models (LLMs) in new ways. Discover their innovative techniques, including depth-first search for token selection, test…
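The "depth-first search for token selection" idea can be illustrated with a toy next-token model. Everything below (the TOY_LM table, the 25% pruning threshold) is an invented stand-in rather than the winners' code; it only shows how DFS can enumerate every completion whose cumulative probability stays above a threshold instead of committing to a single greedy path.
```python
from math import log

# A toy next-token table standing in for an LLM's softmax output (purely illustrative).
TOY_LM = {
    (): {"the": 0.6, "a": 0.4},
    ("the",): {"cat": 0.7, "dog": 0.3},
    ("a",): {"cat": 0.5, "dog": 0.5},
    ("the", "cat"): {"<eos>": 1.0},
    ("the", "dog"): {"<eos>": 1.0},
    ("a", "cat"): {"<eos>": 1.0},
    ("a", "dog"): {"<eos>": 1.0},
}

def dfs_decode(prefix=(), logp=0.0, min_logp=log(0.25), results=None):
    """Depth-first search over token continuations, pruning any branch whose
    cumulative probability drops below a threshold. Unlike greedy or beam
    decoding, this enumerates every sufficiently likely complete answer."""
    if results is None:
        results = []
    for tok, p in TOY_LM.get(prefix, {}).items():
        new_logp = logp + log(p)
        if new_logp < min_logp:              # prune unlikely branches early
            continue
        if tok == "<eos>":
            results.append((prefix, new_logp))
        else:
            dfs_decode(prefix + (tok,), new_logp, min_logp, results)
    return results

for seq, lp in sorted(dfs_decode(), key=lambda r: -r[1]):
    print(" ".join(seq), round(lp, 3))
# -> only "the cat" (probability 0.42) clears the 25% threshold here
```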

Sepp Hochreiter - LSTM: The Comeback Story?
1:07:01
Sepp Hochreiter is the inventor of LSTM (Long Short-Term Memory) networks, a foundational technology in AI. Sepp discusses his journey, the origins of LSTM, and why he believes his latest work, xLSTM, could be the next big thing in AI, particularly for applications like robotics and industrial simulation. He also shares his controversial perspective…

Want to Understand Neural Networks? Think Elastic Origami! - Prof. Randall Balestriero
1:18:10
Professor Randall Balestriero joins us to discuss neural network geometry, spline theory, and emerging phenomena in deep learning, based on research presented at ICML. Topics include the delayed emergence of adversarial robustness in neural networks ("grokking"), geometric interpretations of neural networks via spline theory, and challenges in reco…

Nicholas Carlini (Google DeepMind)
1:21:15
Nicholas Carlini from Google DeepMind offers his view of AI security, emergent LLM capabilities, and his groundbreaking model-stealing research. He reveals how LLMs can unexpectedly excel at tasks like chess and discusses the security pitfalls of LLM-generated code. SPONSOR MESSAGES: *** CentML offers competitive pricing for GenAI model deployment,…

Subbarao Kambhampati - Do o1 models search?
1:32:13
Join Prof. Subbarao Kambhampati and host Tim Scarfe for a deep dive into OpenAI's o1 model and the future of AI reasoning systems. * How o1 likely uses reinforcement learning similar to AlphaGo, with hidden reasoning tokens that users pay for but never see * The evolution from traditional Large Language Models to more sophisticated reasoning system…

How Do AI Models Actually Think? - Laura Ruis
1:18:01
Laura Ruis, a PhD student at University College London and researcher at Cohere, explains her groundbreaking research into how large language models (LLMs) perform reasoning tasks, the fundamental mechanisms underlying LLM reasoning capabilities, and whether these models primarily rely on retrieval or develop procedural knowledge. SPONSOR MESSAGES:…

Jurgen Schmidhuber on Humans co-existing with AIs
1:12:50
Jürgen Schmidhuber, the father of generative AI, challenges current AI narratives, revealing that, in his opinion, early deep learning work is misattributed and actually originated in Ukraine and Japan. He discusses his early work on linear transformers and artificial curiosity which preceded modern developments, shares his expansive vision of …

Yoshua Bengio - Designing out Agency for Safe AI
1:41:53
Professor Yoshua Bengio is a pioneer in deep learning and Turing Award winner. Bengio talks about AI safety, why goal-seeking “agentic” AIs might be dangerous, and his vision for building powerful AI tools without giving them agency. Topics include reward tampering risks, instrumental convergence, global AI governance, and how non-agent AIs could r…

Francois Chollet - ARC reflections - NeurIPS 2024
1:26:46
François Chollet discusses the outcomes of the ARC-AGI (Abstraction and Reasoning Corpus) Prize competition in 2024, where accuracy rose from 33% to 55.5% on a private evaluation set. SPONSOR MESSAGES: *** CentML offers competitive pricing for GenAI model deployment, with flexible options to suit a wide range of models, from small to large-scale de…

Jeff Clune - Agent AI Needs Darwin
2:00:13
AI professor Jeff Clune ruminates on open-ended evolutionary algorithms—systems designed to generate novel and interesting outcomes forever. Drawing inspiration from nature’s boundless creativity, Clune and his collaborators aim to build “Darwin Complete” search spaces, where any computable environment can be simulated. By harnessing the power of l…

Neel Nanda - Mechanistic Interpretability (Sparse Autoencoders)
3:42:36
Neel Nanda, a senior research scientist at Google DeepMind, leads their mechanistic interpretability team. In this extensive interview, he discusses his work trying to understand how neural networks function internally. At just 25 years old, Nanda has quickly become a prominent voice in AI research after completing his pure mathematics degree at Ca…

Jonas Hübotter (ETH) - Test Time Inference
1:45:56
Jonas Hübotter, PhD student at ETH Zurich's Institute for Machine Learning, discusses his groundbreaking research on test-time computation and local learning. He demonstrates how smaller models can outperform larger ones by 30x through strategic test-time computation and introduces a novel paradigm combining inductive and transductive learning appr…

How AI Could Be A Mathematician's Co-Pilot by 2026 (Prof. Swarat Chaudhuri)
1:44:42
Professor Swarat Chaudhuri from the University of Texas at Austin and visiting researcher at Google DeepMind discusses breakthroughs in AI reasoning, theorem proving, and mathematical discovery. Chaudhuri explains his groundbreaking work on COPRA (a GPT-based prover agent) and shares insights on neurosymbolic approaches to AI. Professor Swarat Chaudhu…

Nora Belrose - AI Development, Safety, and Meaning
2:29:50
Nora Belrose, Head of Interpretability Research at EleutherAI, discusses critical challenges in AI safety and development. The conversation begins with her technical work on concept erasure in neural networks through LEACE (LEAst-squares Concept Erasure), while highlighting how neural networks' progression from simple to complex learning patterns c…
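As a rough illustration of what linear concept erasure is after, here is a simplified sketch: project the centred features onto the subspace orthogonal to their cross-covariance with a concept label, so that a least-squares probe can no longer recover the concept. This keeps only the core idea on synthetic data; the actual LEACE estimator uses a whitened, least-squares-optimal oblique projection, and all names below are illustrative.
```python
import numpy as np

rng = np.random.default_rng(0)

def erase_concept(X, z):
    """Simplified concept erasure: remove the cross-covariance direction between
    the (centred) features and the concept label via an orthogonal projection."""
    Xc = X - X.mean(axis=0)
    zc = z - z.mean()
    w = Xc.T @ zc                              # direction a linear probe would exploit
    w /= np.linalg.norm(w)
    P = np.eye(X.shape[1]) - np.outer(w, w)    # orthogonal projector
    return X.mean(axis=0) + Xc @ P

def probe_r2(X, z):
    """Fit a least-squares probe and report how much of z it explains."""
    Xc = X - X.mean(axis=0)
    zc = z - z.mean()
    beta, *_ = np.linalg.lstsq(Xc, zc, rcond=None)
    return 1 - (zc - Xc @ beta).var() / zc.var()

# Synthetic features where dimension 0 leaks a binary concept z.
n = 1000
z = rng.integers(0, 2, size=n).astype(float)
X = rng.normal(size=(n, 5))
X[:, 0] += 3.0 * z                             # the leak

print(f"probe R^2 before erasure: {probe_r2(X, z):.3f}")                    # ~0.7
print(f"probe R^2 after erasure:  {probe_r2(erase_concept(X, z), z):.3f}")  # ~0.0
```
After the projection, every linear function of the edited features has zero empirical covariance with the label, which is the "no linear probe can recover it" guarantee; LEACE additionally makes that edit as small as possible in a least-squares sense.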

Why Your GPUs are underutilised for AI - CentML CEO Explains
2:08:40
Prof. Gennady Pekhimenko (CEO of CentML, UofT) joins us in this *sponsored episode* to dive deep into AI system optimization and enterprise implementation. From NVIDIA's technical leadership model to the rise of open-source AI, Pekhimenko shares insights on bridging the gap between academic research and industrial applications. Learn about "dark si…

Eliezer Yudkowsky and Stephen Wolfram on AI X-risk
4:18:30
Eliezer Yudkowsky and Stephen Wolfram discuss artificial intelligence and its potential existential risks. They traverse fundamental questions about AI safety, consciousness, computational irreducibility, and the nature of intelligence. The discourse centers on Yudkowsky’s argument that advanced AI systems pose an existential threat to humanity…

Pattern Recognition vs True Intelligence - Francois Chollet
2:42:54
Francois Chollet, a prominent AI expert and creator of ARC-AGI, discusses intelligence, consciousness, and artificial intelligence. Chollet explains that real intelligence isn't about memorizing information or having lots of knowledge - it's about being able to handle new situations effectively. This is why he believes current large language models…

The Elegant Math Behind Machine Learning - Anil Ananthaswamy
1:53:11
Anil Ananthaswamy is an award-winning science writer and former staff writer and deputy news editor for the London-based New Scientist magazine. Machine learning systems are making life-altering decisions for us: approving mortgage loans, determining whether a tumor is cancerous, or deciding if someone gets bail. They now influence developments and…

Michael Levin - Why Intelligence Isn't Limited To Brains.
1:03:35
Professor Michael Levin explores the revolutionary concept of diverse intelligence, demonstrating how cognitive capabilities extend far beyond traditional brain-based intelligence. Drawing from his groundbreaking research, he explains how even simple biological systems like gene regulatory networks exhibit learning, memory, and problem-solving abil…

Speechmatics CTO - Next-Generation Speech Recognition
1:46:23
Will Williams is CTO of Speechmatics in Cambridge. In this sponsored episode, he shares deep technical insights into modern speech recognition technology and system architecture. The episode covers several key technical areas: * Speechmatics' hybrid approach to ASR, which focusses on unsupervised learning methods, achieving comparable results with…

Dr. Sanjeev Namjoshi - Active Inference
2:45:32
Dr. Sanjeev Namjoshi, a machine learning engineer who recently submitted a book on Active Inference to MIT Press, discusses the theoretical foundations and practical applications of Active Inference, the Free Energy Principle (FEP), and Bayesian mechanics. He explains how these frameworks describe how biological and artificial systems maintain stab…

Joscha Bach - Why Your Thoughts Aren't Yours.
1:52:45
Dr. Joscha Bach discusses advanced AI, consciousness, and cognitive modeling. He presents consciousness as a virtual property emerging from self-organizing software patterns, challenging panpsychism and materialism. Bach introduces "Cyberanima," reinterpreting animism through information processing, viewing spirits as self-organizing software agent…

Decompiling Dreams: A New Approach to ARC? - Alessandro Palmarini
51:34
Alessandro Palmarini is a post-baccalaureate researcher at the Santa Fe Institute working under the supervision of Melanie Mitchell. He completed his undergraduate degree in Artificial Intelligence and Computer Science at the University of Edinburgh. Palmarini's current research focuses on developing AI systems that can efficiently acquire new skil…