Robots Talking
mstraton8112
55 episodes
2 weeks ago
Technology
How Adobe Built A Specialized Concierge
13 minutes
2 weeks ago
The Human Touch: Building Reliable AI Assistants with LLMs in the Enterprise

Generative AI assistants are demonstrating significant potential to enhance productivity, streamline information access, and improve the user experience in enterprise contexts. These systems serve as intuitive, conversational interfaces to enterprise knowledge, leveraging the capabilities of Large Language Models (LLMs). The domain-specific assistant Summit Concierge, for instance, was developed for Adobe Summit to handle a wide range of event-related queries, from session recommendations to venue logistics, reducing the burden on support staff and providing scalable, real-time access to information.

While LLMs excel at generating fluent, coherent responses, building a reliable, task-aligned AI assistant on a short timeline presents several critical challenges. These systems often face data sparsity in "cold-start" scenarios and the risk of hallucinations or inaccuracies when handling specific or time-sensitive information. Ensuring that the assistant consistently produces trustworthy, contextually grounded answers is essential for user trust and adoption.

To address these issues, the developers adopted a human-in-the-loop development paradigm. This hybrid approach integrates human expertise to guide data curation, response validation, and quality monitoring, enabling rapid iteration and reliable quality without extensive pre-collected data. Techniques included prompt engineering, documentation-aware retrieval, and synthetic data augmentation to bootstrap the assistant. For quality assurance, human reviewers continuously validated and refined responses, while LLM judges auto-selected uncertain cases for review, significantly reducing the need for manual annotation during evaluation.

The real-world deployment of Summit Concierge demonstrated the practical benefits of combining scalable LLM capabilities with lightweight human oversight. This strategy offers a viable path to reliable, domain-specific AI assistants at scale, confirming that agile, feedback-driven development enables robust AI solutions even under strict timelines.
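The workflow described in the episode, documentation-grounded prompting plus LLM-judge routing of uncertain answers to human reviewers, can be sketched roughly as below. This is a minimal illustration, not Adobe's actual implementation: the helper names (retrieve_docs, build_prompt, route_for_review), the keyword-overlap retrieval, the prompt wording, and the judge threshold are all assumptions standing in for a real retrieval backend and judge model.

```python
from dataclasses import dataclass

# Minimal sketch of the described pipeline: ground answers in event documentation,
# score each response with an LLM judge, and route low-confidence cases to humans.
# All helper names are hypothetical; the keyword-overlap retrieval is a toy stand-in.

@dataclass
class Answer:
    question: str
    response: str
    judge_score: float  # 0.0 = judge found it unreliable, 1.0 = fully confident


def retrieve_docs(question: str, corpus: dict[str, str], top_k: int = 3) -> list[str]:
    """Toy documentation-aware retrieval: rank docs by keyword overlap with the query."""
    terms = set(question.lower().split())
    ranked = sorted(corpus.items(),
                    key=lambda kv: len(terms & set(kv[1].lower().split())),
                    reverse=True)
    return [text for _, text in ranked[:top_k]]


def build_prompt(question: str, docs: list[str]) -> str:
    """Prompt engineering: keep the model grounded in the retrieved documentation."""
    context = "\n---\n".join(docs)
    return (
        "Answer using ONLY the documentation below. "
        "If the answer is not present, say you do not know.\n\n"
        f"Documentation:\n{context}\n\nQuestion: {question}\nAnswer:"
    )


def route_for_review(answers: list[Answer], threshold: float = 0.7) -> list[Answer]:
    """LLM-judge gating: only answers scored below the threshold go to human reviewers."""
    return [a for a in answers if a.judge_score < threshold]


if __name__ == "__main__":
    corpus = {
        "sessions": "Keynote sessions run Tuesday morning in Hall A.",
        "venue": "Doors open at 8am; registration is in the main lobby.",
    }
    docs = retrieve_docs("When do keynote sessions start?", corpus)
    print(build_prompt("When do keynote sessions start?", docs))
```

In this sketch the judge score gates human effort: reviewers only see responses the judge flags as uncertain, which mirrors the episode's point that lightweight human oversight, rather than exhaustive annotation, is enough to keep quality high.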