
September 2023 Releases
LLMs can only generate responses based on what they’ve been trained on. They have no built-in access to proprietary business logic, documentation, or contact-specific data.
Some of that can be solved through prompting.
To drive consistent, best-in-class AI experiences, however, prompting alone isn’t enough. You need a way to dynamically enrich agent context with hyper-relevant information—for any user, at any given time, without sacrificing performance.
That's where retrieval comes into play. Retrieval-Augmented Generation (RAG) shapes what your agent says, and when, by dynamically pulling data from trusted, proprietary data sources.
Regal’s new Knowledge Base feature enables RAG directly within the Regal platform, allowing Voice AI Agents to dynamically retrieve information from your internal databases at runtime.
So instead of building and maintaining massive prompts or custom API/ETL pipelines (which introduce latency, more failure points, and ongoing maintenance costs), your AI Agents can pull real context from your data while preserving low latency and lowering costs at any scale.
Regal’s Knowledge Base feature connects AI Agents to your proprietary data sources (product catalogs and documentation, policy docs, support guides, FAQs, etc.).
1. Agent-Level Knowledge Configuration
Each agent can have a customized set of Knowledge Bases (KBs), and each KB can be connected to more than one agent via the Regal platform.
2. Smart, Dynamic Retrieval
Retrieval is triggered dynamically at runtime. Regal invokes the KB only when needed, minimizing unnecessary context injection. This means tighter control, keeping standard call flows clean and performant.
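In pseudocode, the flow looks something like the sketch below. It is purely illustrative; all names are hypothetical, not Regal’s API. The point is that the KB is consulted only when a turn actually calls for grounded knowledge:

```python
# Illustrative sketch of runtime-conditional retrieval. All names here are
# hypothetical (not Regal's API); the KB is queried only when a turn needs
# grounded knowledge, so standard call flows carry no extra context.

KB = {
    "refund": "Refunds are issued to the original payment method within 5-7 business days.",
    "warranty": "All products carry a 12-month limited warranty.",
}

def needs_knowledge(utterance: str) -> bool:
    # Stand-in for the real trigger (e.g., the LLM deciding to call a retrieval tool).
    return any(topic in utterance.lower() for topic in KB)

def retrieve(utterance: str) -> str:
    # Stand-in for vector search; returns only the matching chunk(s).
    return "\n".join(text for topic, text in KB.items() if topic in utterance.lower())

def handle_turn(utterance: str) -> str:
    context = retrieve(utterance) if needs_knowledge(utterance) else ""
    # The (possibly empty) context is injected into the LLM prompt here.
    return f"[context: {context or 'none'}] -> generate reply"

print(handle_turn("What is your refund policy?"))   # KB consulted
print(handle_turn("Hi, I'd like to reschedule."))   # standard flow, no injection
```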
3. Optimized for Relevance and Speed
Retrieval is powered by vector-based search using Amazon Bedrock. The system fetches only the most contextually relevant chunks, which keeps response time low and improves answer precision.
Responses can be further guided by the guardrails and logic defined in your prompts.
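For a concrete sense of what a vector retrieval call looks like, here is the public Amazon Bedrock Knowledge Bases API via boto3. This is a generic illustration with a placeholder knowledge base ID, not Regal’s internal implementation:

```python
import boto3

# Generic illustration of vector retrieval against an Amazon Bedrock
# knowledge base (public boto3 API, placeholder ID; not Regal's code).
client = boto3.client("bedrock-agent-runtime", region_name="us-east-1")

response = client.retrieve(
    knowledgeBaseId="KB12345678",  # placeholder
    retrievalQuery={"text": "What is the cancellation policy?"},
    retrievalConfiguration={
        # Fetch only the top-k most relevant chunks to keep latency
        # and prompt size low.
        "vectorSearchConfiguration": {"numberOfResults": 3}
    },
)

for result in response["retrievalResults"]:
    print(result["score"], result["content"]["text"])
```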
4. Flexible Data Ingestion
Knowledge can be ingested from web URLs, PDFs, DOC files, or raw long-form text, so you can centralize existing documentation without reformatting or writing new code.
For internal-facing resources like help docs, support wikis, or policy guides that evolve regularly, ingesting via URL allows Regal to periodically rescrape the source. This keeps your AI Agent’s knowledge up to date without requiring manual edits to the agent itself.
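At the Bedrock layer, a re-sync of a data source is an ingestion job. Regal manages the rescrape schedule for you, but as a rough illustration of the underlying mechanism (public boto3 API, placeholder IDs):

```python
import boto3

# Rough illustration of re-syncing a knowledge base data source at the
# Bedrock layer (public boto3 API, placeholder IDs). Regal manages the
# rescrape schedule for you; this only shows the underlying mechanism.
client = boto3.client("bedrock-agent", region_name="us-east-1")

job = client.start_ingestion_job(
    knowledgeBaseId="KB12345678",  # placeholder
    dataSourceId="DS12345678",     # placeholder, e.g., a web URL data source
)
print(job["ingestionJob"]["status"])  # e.g., "STARTING"
```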
5. Context-Aware Usage
Each KB includes a structured usage description that informs the AI Agent when and how to use the content. This allows you to govern retrieval behavior, reducing the risk of hallucinations and improving consistency in how your agents apply the knowledge.
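As a hypothetical example of what such a usage description might look like (the field names below are illustrative, not Regal’s actual schema):

```python
# Hypothetical KB configuration; field names are illustrative, not
# Regal's actual schema. The usage description tells the agent when to
# consult this KB and how to apply what it retrieves.
kb_config = {
    "name": "refund-policy",
    "usage_description": (
        "Use this knowledge base only when the caller asks about refunds, "
        "returns, or cancellations. Quote policy terms verbatim; do not "
        "improvise timelines or amounts."
    ),
}
```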
The Regal Knowledge Base gives you a scalable way to inject proprietary knowledge into your AI Agents so they can deliver smarter, brand-aligned, compliant conversations, without relying on bloated prompting or custom integrations.
Here’s what the feature enables:
Because KB content is separate from the prompt, your agents’ tone and logic stay tight, even as you scale up the knowledge they use on calls.
Regal AI Agents fetch answers in real time, always deferring to the correct source of truth.
RAG lets us determine when the LLM’s built-in knowledge should or should not be relied on. When an answer requires a more grounded or branded response, the agent draws on retrieved content rather than the model’s training data alone.
This allows agents to handle everything from lightweight inquiries to complex, regulated scenarios, ensuring every response is contextually accurate, brand-safe, and compliant.
Decoupling the KB from the prompt keeps token size small, which leads to faster LLM responses and lower processing costs (while still allowing you to prompt and build guardrails as strictly as you’d like).
Whether it’s answering product FAQs, scheduling appointments, or resolving policy-related questions, the RAG approach ensures that agents retrieve only the most relevant knowledge based on the specific context of the contact and the call itself.
If a contact is based in California, for instance, the AI Agent can pull California data only, referencing legislation, product availability, and other FAQs. It keeps conversations compliant, targeted, and useful.
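One way this kind of scoping can be implemented is with metadata filters at retrieval time. The sketch below uses Bedrock’s retrieval filter with a hypothetical "state" metadata key and placeholder IDs; it illustrates the idea rather than Regal’s exact implementation:

```python
import boto3

# Sketch of context-scoped retrieval via a metadata filter (public
# Bedrock boto3 API; placeholder IDs and a hypothetical "state" key).
client = boto3.client("bedrock-agent-runtime", region_name="us-east-1")

response = client.retrieve(
    knowledgeBaseId="KB12345678",  # placeholder
    retrievalQuery={"text": "Is this plan available in my state?"},
    retrievalConfiguration={
        "vectorSearchConfiguration": {
            "numberOfResults": 3,
            # Only consider chunks tagged for the contact's state.
            "filter": {"equals": {"key": "state", "value": "CA"}},
        }
    },
)

for result in response["retrievalResults"]:
    print(result["content"]["text"])
```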
Regal’s Knowledge Base feature is ideal for use cases where the AI Agent must reference detailed, dynamic, or regulated content mid-call.
For short, predictable content, you can use prompting to dictate LLM responses. For scalable, more dynamic information, use a Knowledge Base.
Common use cases include answering product FAQs, scheduling appointments, and resolving policy-related questions mid-call.
Why It Matters for Your Business
RAG-based Knowledge Bases are a key enabler of real AI transformation because they help ground responses in trusted data, reduce the risk of hallucinations, and keep latency and token costs low at scale.
With Regal Knowledge Bases, your AI Agents can now deliver more accurate conversations at scale, while giving your team full control over what’s being said.
You can configure Knowledge Bases directly inside the Regal Agent Builder; detailed setup instructions are linked below.
Manually reviewing call transcripts and measuring containment rate (depending on the use case) will give you a good sense of whether the AI is effectively referencing your Knowledge Bases.
Tip: Start with 1–2 KBs per agent, then tune and expand based on call type and complexity.
See detailed instructions on how to add a Knowledge Base to Regal.
With Regal Knowledge Bases, you can scale your AI Agents with real data, without sacrificing speed, performance, or control.
Want help launching your first KB-backed agent? Book time with our team.
Ready to see Regal in action?
Book a personalized demo.