

Making Sense of Artificial Intelligence: Why Governing AI and LLMs is Crucial
Artificial intelligence (AI) is changing our world rapidly, from the tools we use daily to complex systems impacting national security and the economy. With the rise of powerful large language models (LLMs) like GPT-4, which are often the foundation for other AI tools, the potential benefits are huge, but so are the risks. How do we ensure this incredible technology helps society while minimizing dangers like deep fakes, job displacement, or misuse?
A recent policy brief from experts at MIT and other institutions explores this very question, proposing a framework for governing artificial intelligence in the U.S.
Starting with What We Already Know
One of the core ideas is to start by applying existing laws and regulations to activities involving AI. If an activity is regulated when a human does it (like providing medical advice, making financial decisions, or hiring), then using AI for that same activity should also be regulated by the same body. This means existing agencies like the FDA (for medical AI) or financial regulators would oversee AI in their domains. This approach uses familiar rules where possible and automatically covers many high-risk AI applications because those areas are already regulated. It also helps prevent AI from being used specifically to bypass existing laws.
Of course, AI is different from human activity. For example, artificial intelligence doesn't currently have "intent," which many laws are based on. Also, AI can have capabilities humans lack, like finding complex patterns or creating incredibly realistic fake images ("deep fakes"). Because of these differences, the rules might need to be stricter for AI in some cases, particularly regarding privacy, surveillance, and the creation of fake content. The brief suggests requiring AI-generated images to be clearly marked in ways that both humans and machines can detect.
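As a rough illustration of what such marking could look like in practice, here is a minimal Python sketch that adds both a human-visible label and a machine-readable flag to an image. The use of the Pillow library, the metadata key names, and the model label are assumptions made for this example; the brief does not prescribe any specific marking scheme.

    from PIL import Image, ImageDraw, PngImagePlugin

    def mark_as_ai_generated(in_path, out_path):
        # Open the generated image.
        img = Image.open(in_path).convert("RGB")

        # Human-visible marking: draw a small text label in the corner.
        draw = ImageDraw.Draw(img)
        draw.text((10, 10), "AI-generated", fill=(255, 255, 255))

        # Machine-readable marking: embed metadata fields in the PNG file.
        meta = PngImagePlugin.PngInfo()
        meta.add_text("ai_generated", "true")            # hypothetical key name
        meta.add_text("generator", "example-model-v1")   # hypothetical model label
        img.save(out_path, format="PNG", pnginfo=meta)

    def is_marked_ai_generated(path):
        # A downstream tool could check the embedded flag like this.
        return Image.open(path).info.get("ai_generated") == "true"

Real-world provenance efforts such as the C2PA standard work along similar lines but use cryptographically signed metadata, which is much harder to strip or forge than a plain text tag like the one above.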
Understanding What AI Does
Since the technology changes so fast, the brief suggests defining AI for regulatory purposes not by technical terms like "large language model" or "foundation model," but by what the technology does. For example, defining it as "any technology for making decisions or recommendations, or for generating content (including text, images, video or audio)" might be more effective and align better with applying existing laws based on activities.
Knowing How AI Works (or Doesn't)
Who's Responsible? The AI Stack and the "Fork in the Toaster"
Many AI applications are built using multiple AI systems together, like using a general LLM as the base for a specialized hiring tool. This is called an "AI stack". Generally, the provider and user of the final application should be responsible. However, if a component within that stack, like the foundational artificial intelligence model, doesn't perform as promised, its provider might share responsibility. Those building on general-purpose AI should seek guarantees about how it will perform for their specific use. Auditing the entire stack, not just individual parts, is also important due to unexpected interactions.
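To make the idea of an AI stack concrete, here is a small, purely hypothetical Python sketch of a specialized hiring tool layered on top of a general-purpose LLM. The function call_general_llm, the prompt, and the log format are illustrative assumptions, not anything specified in the brief; the point is that the application layer adds its own instructions and keeps records, so the whole stack can be audited rather than just the base model.

    import datetime

    def build_screening_tool(call_general_llm, audit_log):
        # call_general_llm stands in for whatever general-purpose model the
        # application is built on; audit_log records every call so the full
        # stack can be reviewed later, not just the base model in isolation.
        def screen_resume(resume_text):
            prompt = (
                "Summarize this candidate's relevant experience for a software "
                "engineering role. Do not infer age, gender, or ethnicity.\n\n"
                + resume_text
            )
            response = call_general_llm(prompt)
            audit_log.append({
                "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
                "layer": "hiring-screening-tool",
                "prompt": prompt,
                "response": response,
            })
            return response
        return screen_resume

    # Example use with a stubbed base model:
    log = []
    screen = build_screening_tool(lambda p: "stubbed model output", log)
    print(screen("Ten years of backend development experience."))

In this picture, the hiring-tool provider is accountable for the prompt, the safeguards, and the logging it adds, while the base-model provider would share responsibility only if the underlying model failed to perform as promised.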
The brief uses the analogy of putting a "fork in the toaster" to explain user responsibility. Users shouldn't be held responsible for a use that turns out to be harmful if they weren't clearly warned against it, especially if the provider could have foreseen or prevented the harm. Providers need to clearly spell out proper uses and implement safeguards. Ultimately, the provider is generally responsible unless they can show that the user should have known the use was irresponsible and that the problem was unforeseeable or unpreventable by the provider.
Special Considerations for General Purpose AI (like LLMs)
Providers of broad artificial intelligence systems like GPT-4 cannot possibly know all the ways their systems might be used. But these systems pose risks because they are widely available and can be used for almost anything.
The government could also place requirements on providers of general AI systems, for instance that they clearly disclose intended and improper uses and build in safeguards against foreseeable misuse.
Even with these measures, general artificial intelligence systems must still comply with all existing laws that apply to human activities. Providers might also face more severe responsibility if problems arise from foreseeable uses they didn't adequately prevent or warn against with clear, prominent instructions.
The Challenge of Intellectual Property
Another big issue with AI, particularly generative artificial intelligence like LLMs that create content, is how it interacts with intellectual property (IP) rights such as copyright. Courts have held that only humans can own IP, but it's unclear how IP laws apply when AI is involved. Using material from the internet to train AI systems is currently assumed not to be copyright infringement, though that assumption is being challenged. And while training itself doesn't directly produce content, using the trained AI might produce infringing output. It's an open question whether AI-generated infringing content will be easier or harder to identify than human-generated infringement; some AI systems might eventually help by referencing their original sources. Some companies are starting to offer legal defense to paying customers against copyright claims related to AI-generated content, provided the users followed the required safety measures.
Moving Forward
The policy brief concludes that the current situation regarding AI governance is somewhat of a "buyer beware" (caveat emptor) environment. It's often unclear how existing laws apply, and there aren't enough clear rules or incentives to proactively find and fix problems in risky systems. Users of systems built on top of general AI also lack sufficient information and recourse if things go wrong. To fully realize the benefits of artificial intelligence, more clarity and oversight are needed.
Achieving this will likely require a mix of adapting existing regulations, possibly creating a new, narrowly focused AI agency to handle issues that fall outside current regulators' domains, developing standards (perhaps through an organization similar to those overseeing financial audits), and encouraging more research into making AI systems safer and more beneficial.