Anthea Roberts on dragonfly thinking, integrating multiple perspectives, human-AI metacognition, and cognitive renaissance (AC Ep73)
“Not everyone can see with dragonfly eyes, but can we create tools that help enable people to see with dragonfly eyes?”
– Anthea Roberts
About Anthea Roberts
Anthea Roberts is Professor at the School of Regulation and Global Governance at the Australian National University (ANU) and a Visiting Professor at Harvard Law School. She is also the Founder, Director and CEO of Dragonfly Thinking. Her latest book, Six Faces of Globalization, was selected as one of the Best Books of 2021 by The Financial Times and Fortune Magazine. She has won numerous prestigious awards and has been named “The World’s Leading International Law Scholar” by the League of Scholars.
What you will learn
- Exploring the concept of dragonfly thinking
- Creating tools to see complex problems through many lenses
- Shifting roles from generator to director and editor with AI
- Understanding metacognition in human-AI collaboration
- Addressing cultural biases in large language models
- Applying structured analytic techniques to real-world decisions
- Navigating the cognitive industrial revolution with AI
Episode Resources
Books
- Is International Law International? by Anthea Roberts
- Six Faces of Globalization by Anthea Roberts
Transcript
Ross Dawson: Anthea, it is a delight to have you on the show.
Anthea Roberts: Thank you very much for having me.
Ross: So you have a very interesting company called Dragonfly Thinking, and I’d like to delve into that and dive deep. But first of all, I’d like to hear the backstory of how you came to see the idea and create the company.
Anthea: Well, it’s probably an unusual route to creating a startup. I come with no technology background initially, and two years ago, if you told me I would start a tech startup, I would never have thought that was very likely—and no one around me would have, either.
My other hat that I wear when I’m not doing the company is as a professor of global governance at the Australian National University and a repeat visiting professor at Harvard. I’ve traditionally worked on international law, global governance, and, more recently, economics, security, and pushback against globalization.
I moved into a very interdisciplinary role, where I ended up doing a lot of work with different policymakers. Part of what I realized I was doing as I moved around these fields was creating something that the intelligence agencies call structured analytic techniques—techniques for understanding complex, ambiguous, evolving situations.
For instance, in my last book, I used one technique to understand the pushback against economic globalization through six narratives—looking at a complex problem from multiple sides. Another was a risk, reward, and resilience framework to integrate perspectives and make decisions. All of this, though, I had done completely analog.
Then the large language models came out. I was working with Sam Bide, a younger colleague who was more technically competent than I was. One day, he decided to teach one of my frameworks to ChatGPT. On a Saturday morning, he excitedly sent me a message saying, “That framework is really transferable!”
I replied, “I made it to be really transferable.”
He said, “No, no, it’s really transferable.”
We started going back and forth on this. At the time, Sam was moving into policy, and he created a persona called “Robo Anthea.” He and other policymakers would ask Robo Anthea questions. It had my published academic scholarship, but also my unpublished work.
At a very early stage, I had this confronting experience of having a digital twin. Some people asked, “Weren’t you horrified or worried about copyright infringement?” But I didn’t have that reaction. I thought it was amazingly interesting.
What could happen if you took structured techniques and worked with this extraordinary form of cognition? It allowed us to apply these techniques to areas I knew nothing about. It also let me hand this skill off to other people.
I leaned into it completely—on one condition: we changed the name from Robo Anthea to Dragonfly Thinking. It was both less creepy for me and a better metaphor. This way of seeing complex problems from many different sides is a dragonfly’s ability.
I think I’m a dragonfly, but I believe there are many dragonflies out there. I wanted to create a platform for this kind of thinking—where dragonflies could “swarm” around and develop ideas together.
Ross: Just explain the dragonfly concept.
Anthea: We took the concept from some work done by Philip Tetlock. When the CIA wanted to determine who was best at understanding complex problems, they found that traditional experts performed poorly.
These experts tended to have one lens of analysis, which they overemphasized. This caused them to overlook some things and get blindsided by others.
In contrast, Tetlock found a group of individuals who were much better forecasters. They were incredibly diverse and 70% better than traditional experts—35% better than the CIA itself, even without access to classified material.
The one thing they had in common was that they saw the world through dragonfly eyes. Dragonfly eyes have thousands of lenses instead of one, allowing them to create an almost 360-degree view of reality. This predictive ability makes dragonflies some of the best predators in the world.
These qualities—seeing through multiple lenses, integrating perspectives, and stress-testing—are exactly what we need for complex problems.
- We need to see problems from many lenses: different perspectives, disciplines, and cognitive approaches.
- We must integrate this into a cohesive understanding to make decisions.
- We need to stress-test it by thinking about complex systems, dynamics, and future scenarios, so we can act with foresight despite uncertainty.
The AI part of this is critical because not everyone can see with dragonfly eyes. The question becomes: can we create tools to enable people to do so?
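To make those three steps concrete, here is a minimal sketch of such a multi-lens pipeline in Python. The `ask_llm` helper, the lens list, and the prompts are hypothetical placeholders for illustration, not Dragonfly Thinking's actual implementation.

```python
# Hypothetical sketch of a "dragonfly" multi-lens pipeline.
# ask_llm is a stand-in for any chat-completion call, not a real API.

LENSES = ["economic", "security", "legal", "political", "technological"]

def ask_llm(prompt: str) -> str:
    """Placeholder: wire this to an LLM provider of your choice."""
    raise NotImplementedError

def dragonfly_analysis(problem: str) -> str:
    # 1. See the problem through many lenses.
    views = {
        lens: ask_llm(f"Analyze this problem strictly from a {lens} "
                      f"perspective: {problem}")
        for lens in LENSES
    }
    # 2. Integrate the perspectives into one coherent assessment.
    combined = "\n\n".join(f"[{lens}]\n{view}" for lens, view in views.items())
    synthesis = ask_llm(
        "Integrate these perspectives into a single assessment, noting "
        f"where they agree and where they conflict:\n\n{combined}"
    )
    # 3. Stress-test against system dynamics and future scenarios.
    return ask_llm(
        "Stress-test this assessment: which plausible scenarios or "
        f"system dynamics would break it?\n\n{synthesis}"
    )
```

The design point is simply that seeing, integrating, and stress-testing are separate passes, so a human director can inspect or redirect each one.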
Ross: There are so many things I’d like to dive into, but just to get the big picture: this is obviously human-AI collaboration. These are complex problems where humans have the fullest context and decision-making ability, complemented by AI.
What does that interface look like? How do humans develop the skills to use AI effectively?
Anthea: I think this is one of the most interesting and evolving questions. In the kind of complex cognition we deal with, we aim to co-create with the LLMs as partners.
What I’ve noticed is that you shift roles. Instead of being the primary generator, you become the director or manager, deciding how you want the LLM to operate. You also take on a role as an editor or co-editor, moving back and forth.
This means humans stay in the loop but in a different way.
Another important aspect is recognizing where humans and AI excel. Not everyone is good at identifying when they’re better at a task versus when the AI is.
For instance, AI can hold a level of cognitive complexity that humans often cannot. In our risk, reward, and resilience framework, humans may overfocus on risk or reward. Some can hold the drivers of risk, reward, and resilience but can’t manage the interconnections.
AI can offload some of this cognitive load. The key is creating an interface that lets you focus on specific elements, cognitively “offload” them, and continue building.
That’s not easy to do with a basic chat interface, for example. This is why I think the way we interact with LLMs—and the UI/UX—will evolve significantly. It’s about figuring out when the AI leads, when you lead, and how you co-create.
Something like ChatGPT’s Canvas mode is a great example. It allows real-time editing and co-creation of individual sentences, which feels like a glimpse into where this technology is heading.
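As a rough illustration of the offloading Anthea describes, the sketch below stores risk, reward, and resilience drivers plus their interconnections so a user can surface one element at a time. The structure and field names are assumptions for illustration, not the actual framework's schema.

```python
from dataclasses import dataclass, field

@dataclass
class Driver:
    """One driver of risk, reward, or resilience."""
    name: str
    kind: str  # "risk", "reward", or "resilience"
    description: str = ""

@dataclass
class RRRModel:
    """Holds drivers and their interconnections so the human need not
    keep the whole web in working memory at once."""
    drivers: list[Driver] = field(default_factory=list)
    # Interconnections as (source, target, effect) triples.
    links: list[tuple[str, str, str]] = field(default_factory=list)

    def focus_on(self, driver_name: str) -> list[tuple[str, str, str]]:
        """Surface only the links touching one driver, offloading the rest."""
        return [l for l in self.links if driver_name in (l[0], l[1])]
```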
Ross: Yes, I completely agree on the metacognition aspect. That’s becoming central to my work—seeing your own cognition and recognizing the AI’s cognition as well. You need to pull back, observe the systemic cognition between humans and AI, and figure out how to allocate tasks effectively.
Anthea: I completely agree. Over the last year and a half, I’ve realized that almost all of my work is metacognitive. I rarely tell people what to think, but I have an ability to analyze how people think—how groups think, what paradigms they operate in, and where disagreements occur at higher levels of abstraction.
It turns out those second- and third-order abstractions about how to think are exactly what we can teach into these models and apply across many areas.
Initially, I thought I was just applying my own metacognitive approaches on top of the AI. Now I realize I also need a deep understanding of what’s happening inside the models themselves.
For instance, agentic workflows can introduce biases or particular ways of operating. You need cognitive awareness not just of your relationship with the AI but also of how the model itself operates.
Another challenge is managing the sheer volume of output from the AI. There’s often a deluge of information, and you have to practice discernment to avoid being overwhelmed.
Now, I’m also starting to think about how to simplify these tools so that people with different levels of cognitive complexity can easily access and use them. That’s where a product manager would come in—to streamline what I do and make it less intimidating for others.
If you combine this with interdisciplinary agents—looking at problems from different perspectives and working with experts—it’s metacognition layered on metacognition. I think this will be one of the defining challenges of our time: how we process this complexity without becoming overwhelmed or outsourcing too much of our thinking.
Ross: Yes, absolutely. As a startup, you do have to choose your audiences carefully. Focusing on highly complex problems makes sense because the value is so high, and it’s an underserved market.
On that note, I’m curious about the interfaces. Are you incorporating visual elements? Or is it primarily text-based, step-by-step interactions?
Anthea: I tend to be a highly visual and metaphorical thinker, so I’m drawn to visuals to help with this. Visual representations can often capture complex concepts more intuitively and succinctly than words.
We’re currently experimenting with ways to visually represent concepts like complex systems diagrams, interventions, causes, consequences, and effects.
I also think the idea of artifacts is crucial. You see this with tools like Claude’s Artifacts, ChatGPT’s Canvas, and others. It’s about moving beyond a chat interface and creating something that can store, build upon, and expand ideas over time.
Another idea I’m exploring is “daemons” or personas—AI agents that act like specialists sitting on your shoulder. You could invoke an economics expert, a political science expert, or even a writing coach to give you critiques or perspectives.
This leads to new challenges, like saving and version control when collaborating not just with an AI but with other humans and their AIs. These are open questions, but I expect significant progress in the next few years as we move beyond the dominance of chat interfaces.
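A toy version of the daemon idea mentioned a moment ago might look like the following, with the persona prompts and the `ask_llm` stub again as hypothetical stand-ins rather than any real product's API.

```python
# Hypothetical "daemon" personas: one model invoked under different
# specialist framings. ask_llm is a placeholder, not a real API.

PERSONAS = {
    "economist": "You are an economics expert. Critique the draft's "
                 "incentives and cost assumptions.",
    "political_scientist": "You are a political scientist. Critique the "
                           "draft's institutional and power dynamics.",
    "writing_coach": "You are a writing coach. Critique clarity, "
                     "structure, and argument flow.",
}

def ask_llm(prompt: str) -> str:
    """Placeholder: wire this to an LLM provider of your choice."""
    raise NotImplementedError

def invoke_daemon(persona: str, draft: str) -> str:
    """Ask one shoulder-sitting specialist for a critique."""
    return ask_llm(f"{PERSONAS[persona]}\n\nDraft:\n{draft}")

def swarm_critique(draft: str) -> dict[str, str]:
    """Collect every persona's critique of the same draft."""
    return {name: invoke_daemon(name, draft) for name in PERSONAS}
```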
Ross: Harrison Chase, CEO of LangChain, talks about cognitive architectures, which I think aligns perfectly with what you’re doing. You’re creating systems where human and AI agents work together to enhance cognition.
Anthea: Exactly. I read a paper recently on metacognition—on knowing when humans make better decisions versus when the AI does. It showed that humans often made poor decisions about when to intervene, while the AI did better when deciding whether to involve humans.
That’s fascinating and shows how much work we need to do on understanding these architectures.
Ross: Are there any specific cognitive architecture archetypes you’re exploring or see potential in?
Anthea: I haven’t made as much progress on that yet, beyond observing the shift from humans being primary generators to directors and editors.
One thing I’ve been thinking about is how our culture celebrates certain roles—like the athlete on the field, the actor on stage, or the writer—while undervaluing the coach, director, or editor.
With AI, we’re moving into a world where the AI takes on those celebrated roles, and we become the coach, director, or editor. For instance, if you were creating an AI agent to represent a famous athlete, you wouldn’t ask the athlete to articulate their skills—they often can’t. You’d ask the coach.
Yet, culturally, we valorize the athlete, not the coach. This redistribution of roles will be fascinating to watch.
Similarly, we’ve historically overvalued STEM knowledge compared to the humanities and social sciences. Now we’re seeing a shift where those disciplines—like philosophy and argumentation—become crucial in the AI age.
Ross: Yes, absolutely. The framing and broader context are where humans shine, especially when AI has inherent limitations despite its generative capabilities.
Anthea: Exactly. AI models are generative, but they’re ultimately limited and contained. Humans bring the broader perspective, but we also get tired and cranky in ways the models don’t.
Ross: Earlier, you mentioned intelligence agencies as a core audience. How do their needs differ in terms of delivering these interfaces?
Anthea: We’re still in the early stages, with pilots launching early next year. I’ve worked with government agencies for a long time, so I know there are differences.
AI adoption in institutions is much slower than the technology itself. Governments and big enterprises are risk-averse, concerned about safety, transparency, and bias.
For intelligence agencies, I expect they’ll want models that are fully disconnected from the internet, with heightened security requirements.
I’m also fascinated by the Western and English-language biases in current frontier models. Down the track, I’d like to explore Chinese, Arabic, and French models to understand how different training data and reinforcement learning influence outcomes. This could enhance cross-cultural diplomacy, intelligence, and understanding.
We’re already seeing ideas like wisdom of the silicon crowd, where multiple models are combined for better predictions. But I think it’s not just about combining models—it’s about embracing their diverse cultural perspectives.
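As a minimal sketch of that aggregation idea: pool probability forecasts from several models and take the median, while keeping the individual estimates, since the spread across culturally distinct models is itself informative. The model names and the `forecast` helper are placeholders, not real endpoints.

```python
from statistics import median

# Hypothetical stand-ins for models trained on different corpora.
MODELS = ["western_model", "chinese_model", "arabic_model", "french_model"]

def forecast(model: str, question: str) -> float:
    """Placeholder: return one model's probability estimate for a question."""
    raise NotImplementedError

def silicon_crowd(question: str) -> tuple[float, list[float]]:
    """Median-aggregate the forecasts but keep the individual views:
    the disagreement between models carries cross-cultural information."""
    estimates = [forecast(m, question) for m in MODELS]
    return median(estimates), estimates
```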
Ross: Yes, and I’ve seen papers on the biases in LLMs based on language and cultural training data. That’s such a fascinating and underexplored area.
Anthea: Absolutely. The first book I wrote, Is International Law International?, explored how international law isn’t uniform. Lawyers in China, Russia, and the US operate with different languages, universities, and assumptions.
We’re going to see the same thing with LLMs. Western and Chinese models may each have their own bell curves, but they’ll be very different. It’s a dynamic we haven’t fully grappled with yet.
Ross: And that interplay between polarization and convergence will be key.
Anthea: Exactly. Social media polarizes, creating barbells—hollowing out the middle and amplifying extremes. In contrast, LLMs tend to squash toward a bell curve, centering on the median.
Within a language area, LLMs can be anti-polarizing. But between language-based models, we’ll see significant polarization—different bell curves reinforcing different realities.
Understanding this interplay will be critical as we move forward.
Ross: This has been an incredible conversation, Anthea. What excites you most about the future—whether in your company, your work, or the world at large?
Anthea: I’ve fallen completely down the AI rabbit hole. As someone without a tech background, I now find myself reading AI papers constantly—it’s like a new enlightenment or cognitive industrial revolution.
The speed, scale, and cognitive extension AI enables are extraordinary. I feel like I’m living through a transformative moment that will redefine education, research, and so many other fields.
It’s exciting, turbulent, and challenging—but I just can’t look away.
Ross: I couldn’t agree more. It’s a privilege to be alive at this moment, experiencing what it means to think and be human in an age of such transformation.
Thank you for everything you’re doing, Anthea.
Anthea: Thank you for having me.