Understanding agentic RAG in SmartHub Conversational Search
SmartHub Conversational Search uses Agentic Retrieval-Augmented Generation (Agentic RAG) to deliver more accurate, context-aware answers. Instead of relying on a single search and response step, agentic RAG allows SmartHub to reason, retrieve, and act across multiple stages before generating a response.
This approach enables SmartHub to handle complex questions, integrate live systems, consult indexed content, and adapt responses based on retrieved data and conversation context.
What Is Agentic RAG?
Agentic RAG combines:
- Retrieval-Augmented Generation (RAG) to ground responses in search results and data sources
- AI agents that can make decisions, invoke tools or APIs, and perform multi-step reasoning
In SmartHub, conversational search is treated as a workflow, not a single model call.
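As a rough sketch of what such a workflow looks like, the following Python illustrates a decide-retrieve-generate loop instead of a single model call. Every name and the toy index here are hypothetical illustrations, not SmartHub's actual API:

```python
# Illustrative sketch only: an answer produced by a multi-stage
# workflow (decide -> retrieve -> generate) rather than one model
# call. All names are hypothetical, not SmartHub's real API.

def decide_strategy(query: str, index: dict) -> str:
    # The agent first decides whether indexed sources can ground the answer.
    return "grounded" if any(topic in query.lower() for topic in index) else "generic"

def retrieve(query: str, index: dict) -> list:
    # Pull passages for every indexed topic mentioned in the query.
    return [p for topic, passages in index.items()
            if topic in query.lower() for p in passages]

def generate(query: str, passages: list) -> str:
    # Final stage: ground the answer in retrieved passages when available.
    if passages:
        return f"Based on indexed sources: {passages[0]}"
    return "Here is a general answer without indexed sources."

def conversational_search(query: str, index: dict) -> str:
    strategy = decide_strategy(query, index)
    passages = retrieve(query, index) if strategy == "grounded" else []
    return generate(query, passages)

index = {"expense report": ["Submit expense reports within 30 days."]}
print(conversational_search("How do I file an expense report?", index))
```

The point of the sketch is the branching: the same entry point can produce either a grounded or a generic answer, depending on an upfront decision rather than a fixed retrieval step.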
Core Components in SmartHub
Agentic RAG in SmartHub is built around a small set of configurable components:
- Orchestrator: Determines the overall flow and decides which type of search to run: a generic response, or one grounded in crawled sources.
- Query Processor: Interprets and refines the user’s input.
- Response Generator: Produces the final conversational answer.
- Result Insight: Extracts structured or analytical information from the response.
Together, these components define how SmartHub processes, reasons over, and answers conversational queries.
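A minimal sketch of how these four components might fit together in a single request flow. The class names and logic are entirely hypothetical; SmartHub's real configuration surface will differ:

```python
# Hypothetical sketch of the four components in one request flow.
# None of these class names are taken from SmartHub's actual API.

class QueryProcessor:
    """Interprets and refines the user's raw input."""
    def refine(self, raw: str) -> str:
        return raw.strip().lower().rstrip("?")

class Orchestrator:
    """Decides which flow to run: generic, or grounded in crawled sources."""
    def choose_flow(self, query: str, index: dict) -> str:
        return "crawled" if any(topic in query for topic in index) else "generic"

class ResponseGenerator:
    """Produces the final conversational answer."""
    def answer(self, query: str, passages: list) -> str:
        return passages[0] if passages else f"General guidance on '{query}'."

class ResultInsight:
    """Extracts simple structured information from the response."""
    def extract(self, answer: str) -> dict:
        return {"word_count": len(answer.split())}

def handle(raw_query: str, index: dict) -> dict:
    query = QueryProcessor().refine(raw_query)
    flow = Orchestrator().choose_flow(query, index)
    passages = ([p for topic, ps in index.items() if topic in query for p in ps]
                if flow == "crawled" else [])
    answer = ResponseGenerator().answer(query, passages)
    return {"flow": flow, "answer": answer, "insight": ResultInsight().extract(answer)}

index = {"vacation policy": ["Employees accrue 20 vacation days per year."]}
result = handle("What is the vacation policy?", index)
```

Note the ordering: the Query Processor runs before the Orchestrator, so routing decisions are made on the refined query, and the Result Insight step runs last, over the generated answer.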
Actions and External Integration
Agents can perform:
- Prompt actions for reasoning or transformation steps handled entirely by the model
- API actions to securely call external systems and retrieve live data
These actions allow SmartHub to move beyond indexed content and incorporate real-time or protected information into conversational search.
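The distinction between the two action kinds can be sketched as follows. The model stub, client, and endpoint are invented for illustration and bear no relation to SmartHub's real integrations:

```python
# Illustrative only: the two action kinds an agent can perform.
# "Prompt actions" run entirely in the model; "API actions" call an
# external system. All names and the endpoint are hypothetical.

def prompt_action(model_fn, text: str) -> str:
    # Reasoning/transformation handled entirely by the model.
    return model_fn(f"Summarize in one sentence: {text}")

def api_action(client, endpoint: str, params: dict) -> dict:
    # Fetch live data from an external system; a real deployment
    # would attach credentials and call over HTTPS.
    return client.get(endpoint, params)

def fake_model(prompt: str) -> str:
    # Stand-in for a real LLM call so the sketch runs offline.
    return prompt.split(": ", 1)[1]

class FakeClient:
    """Stand-in for an authenticated HTTP client."""
    def get(self, endpoint: str, params: dict) -> dict:
        return {"endpoint": endpoint, "status": "ok", "data": {"tickets_open": 3}}

print(prompt_action(fake_model, "Agentic RAG combines retrieval with agents."))
print(api_action(FakeClient(), "/v1/tickets", {"assignee": "me"})["data"])
```

A prompt action never leaves the model, so it can only work with indexed or in-context information; an API action is what lets the agent pull in live or access-protected data.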
Next Steps
After understanding these concepts, administrators can configure the components and actions described above. These settings determine how SmartHub’s agentic RAG behaves in production.