
Content provided by mstraton8112. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by mstraton8112 or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process described here: https://he.player.fm/legal.

How Adobe Built A Specialized Concierge EP 55

13:15
 
The Human Touch: Building Reliable AI Assistants with LLMs in the Enterprise

Generative AI assistants are demonstrating significant potential to enhance productivity, streamline information access, and improve the user experience in enterprise contexts. These systems serve as intuitive, conversational interfaces to enterprise knowledge, leveraging the capabilities of Large Language Models (LLMs). The domain-specific assistant Summit Concierge, for instance, was developed for Adobe Summit to handle a wide range of event-related queries, from session recommendations to venue logistics, aiming to reduce the burden on support staff and provide scalable, real-time access to information.

While LLMs excel at generating fluent and coherent responses, rapidly building a reliable, task-aligned AI assistant presents several critical challenges. These systems often face hurdles such as data sparsity in "cold-start" scenarios and the risk of hallucinations or inaccuracies when handling specific or time-sensitive information. Ensuring that the assistant consistently produces trustworthy, contextually grounded answers is essential for user trust and adoption.

To address data sparsity and the need for reliable quality, the developers adopted a human-in-the-loop development paradigm. This hybrid approach integrates human expertise into data curation, response validation, and quality monitoring, enabling rapid iteration and reliability without extensive pre-collected data. Techniques included prompt engineering, documentation-aware retrieval, and synthetic data augmentation to bootstrap the assistant. For quality assurance, human reviewers continuously validated and refined responses; this review process used LLM judges to auto-select uncertain cases, significantly reducing the need for manual annotation during evaluation.

The real-world deployment of Summit Concierge demonstrated the practical benefits of combining scalable LLM capabilities with lightweight human oversight. This strategy offers a viable path to reliable, domain-specific AI assistants at scale, confirming that agile, feedback-driven development enables robust AI solutions even under strict timelines.
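The loop described above (ground each answer in retrieved event documentation, then have an LLM judge flag low-confidence answers for human review) can be illustrated with a minimal Python sketch. This is not Adobe's implementation: the generic LLMFn callable, the keyword-overlap retriever, the 0-100 judging convention, and the 0.7 confidence threshold are all assumptions introduced for the example.

```python
from dataclasses import dataclass
from typing import Callable, List

# Assumed helper type: wraps whatever LLM endpoint is in use.
# Its existence and signature are assumptions for this sketch.
LLMFn = Callable[[str], str]


@dataclass
class Doc:
    title: str
    text: str


def retrieve_docs(query: str, corpus: List[Doc], top_k: int = 3) -> List[Doc]:
    """Toy documentation-aware retrieval: rank event docs by keyword overlap.
    A production system would use an embedding index instead."""
    q_terms = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda d: len(q_terms & set(d.text.lower().split())),
        reverse=True,
    )
    return scored[:top_k]


def build_prompt(query: str, docs: List[Doc]) -> str:
    """Prompt engineering: ground the answer in retrieved documentation and
    instruct the model to defer when the docs do not cover the question."""
    context = "\n\n".join(f"[{d.title}]\n{d.text}" for d in docs)
    return (
        "You are an event concierge assistant.\n"
        "Answer ONLY from the documentation below. If it does not contain the "
        "answer, say you are not sure and suggest contacting event staff.\n\n"
        f"Documentation:\n{context}\n\nQuestion: {query}\nAnswer:"
    )


def judge_confidence(query: str, answer: str, docs: List[Doc], llm: LLMFn) -> float:
    """LLM-as-judge: score how well the answer is supported by the retrieved
    docs. Parsing a 0-100 score from the reply is an illustrative convention."""
    context = "\n".join(d.text for d in docs)
    verdict = llm(
        "Rate 0-100 how fully the answer is supported by the context.\n"
        f"Context:\n{context}\nQuestion: {query}\nAnswer: {answer}\n"
        "Reply with a single integer."
    )
    try:
        return float(verdict.strip()) / 100.0
    except ValueError:
        return 0.0  # unparsable verdict -> treat as uncertain


def answer_query(query: str, corpus: List[Doc], llm: LLMFn,
                 review_queue: List[dict], threshold: float = 0.7) -> str:
    """End-to-end loop: retrieve, generate, judge, and route uncertain cases
    to a human review queue instead of annotating every response manually."""
    docs = retrieve_docs(query, corpus)
    answer = llm(build_prompt(query, docs))
    confidence = judge_confidence(query, answer, docs, llm)
    if confidence < threshold:
        review_queue.append({"query": query, "answer": answer,
                             "confidence": confidence})
    return answer
```

Under this scheme, human reviewers only work through review_queue while confident answers ship directly, which is how the approach keeps manual annotation light during evaluation.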

57 episodes
