Hybrid Search
Hybrid Search combines the strengths of keyword and semantic search methods to deliver more relevant results than either approach alone. By integrating keyword-based, natural language, and neural search methods, it provides a comprehensive search experience that adapts to diverse user needs and query types.
With Hybrid Search, you can find relevant solutions even if you don't know the exact document name or keyword. Whether you are looking for precise details or more general information, Hybrid Search bridges the gap between traditional search methods and advanced AI-driven insights. It also adapts dynamically to the way you type your query: the system automatically adjusts how it looks for results depending on whether your search is short or long (see the sketch after this list).
- Short searches (1 word): The search focuses more on exact matches, helping you find specific terms quickly.
- Medium-length searches (2–4 words): The system blends exact matches with context understanding, giving you more relevant results.
- Long searches (5+ words): The search leans on advanced language understanding to grasp what you mean and surface the best article right away.
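For illustration, here is a minimal sketch of how length-based blending could work. The weights, thresholds, and function names are hypothetical assumptions for this example, not the product's actual tuning:

```python
# Illustrative only: weights and thresholds below are hypothetical,
# not RightAnswers' actual configuration.

def blend_weights(query: str) -> tuple[float, float]:
    """Return (lexical_weight, semantic_weight) based on query length."""
    n_terms = len(query.split())
    if n_terms <= 1:
        return 0.9, 0.1   # short query: favor exact keyword matches
    if n_terms <= 4:
        return 0.5, 0.5   # medium query: blend exact matches with context
    return 0.2, 0.8       # long query: lean on semantic understanding

def hybrid_score(lexical: float, semantic: float, query: str) -> float:
    """Combine a lexical score and a semantic score for one document."""
    w_lex, w_sem = blend_weights(query)
    return w_lex * lexical + w_sem * semantic

# Two-word query -> medium blend: 0.5*0.8 + 0.5*0.3 = 0.55
print(hybrid_score(0.8, 0.3, "reset password"))
```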
- Keyword-Based Search: Uses traditional search techniques to quickly retrieve results based on exact or partial keyword matches.
- Natural Language Search: Improves the user experience by understanding and processing queries phrased in natural, conversational language.
- Neural Search: Adds a layer of intelligence by understanding the context, nuances, and intent behind queries for more accurate and relevant results.
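To make the combination concrete, one common technique for fusing ranked lists from keyword and neural retrieval is reciprocal rank fusion (RRF). The sketch below is a generic illustration of that technique under assumed document IDs, not RightAnswers' exact scoring method:

```python
from collections import defaultdict

def reciprocal_rank_fusion(result_lists: list[list[str]], k: int = 60) -> list[str]:
    """Fuse ranked document-ID lists, e.g. one from keyword search and one
    from neural search, into a single ranking."""
    scores: defaultdict[str, float] = defaultdict(float)
    for results in result_lists:
        for rank, doc_id in enumerate(results, start=1):
            scores[doc_id] += 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Usage: a document ranked well by both methods rises to the top.
keyword_hits = ["doc_42", "doc_7", "doc_13"]
neural_hits = ["doc_7", "doc_99", "doc_42"]
print(reciprocal_rank_fusion([keyword_hits, neural_hits]))
# -> ['doc_7', 'doc_42', 'doc_99', 'doc_13']
```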
RightAnswers Neural & Hybrid Search leverages state-of-the-art Large Language Models (LLMs) and a robust infrastructure to deliver highly accurate and context-aware search results. This section provides insights into the model selection, server configurations, model worker configurations, re-indexing processes, and embedding service setup.
Embedding Service Configuration
- Purpose: The embedding service converts textual data into vector representations that the neural search engine can process. This step is critical for enabling the search engine to understand and retrieve contextually relevant information.
- Model Compatibility: The service must be configured to support models that generate embeddings compatible with the Apache Solr search infrastructure.
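As a rough illustration of this flow, the sketch below embeds a document and indexes it into Solr. The service URL, field names, and response shape are assumptions, not the actual RightAnswers configuration; it presumes a Solr 9+ schema with a dense-vector field whose dimension matches the embedding model's output:

```python
import requests

# Hypothetical endpoints and field names, for illustration only.
EMBED_URL = "http://embedding-service:8080/embed"
SOLR_URL = "http://solr:8983/solr/knowledge/update?commit=true"

def embed(text: str) -> list[float]:
    """Call the embedding service to turn text into a vector.
    Assumes the service returns JSON of the form {"vector": [...]}."""
    resp = requests.post(EMBED_URL, json={"text": text})
    resp.raise_for_status()
    return resp.json()["vector"]

def index_solution(doc_id: str, title: str, body: str) -> None:
    """Index a solution together with its embedding into Solr."""
    doc = {
        "id": doc_id,
        "title": title,
        "body": body,
        # Hypothetical dense-vector field; its dimension must match
        # the embedding model's output.
        "body_vector": embed(body),
    }
    requests.post(SOLR_URL, json=[doc]).raise_for_status()
```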
Re-indexing
Re-indexing is the process of updating the search index with new or modified data. This step is crucial to ensure that the neural search engine can accurately and efficiently retrieve relevant results.
When changes are made, such as enabling embeddings, adding new data, or updating the model used for generating embeddings, the search index must be refreshed to reflect them. This ensures that the search engine has the most current and contextually accurate information, leading to improved search accuracy and performance.
- Re-indexing Time: Typically ranges from 15 minutes to 3 hours, depending on data size and model complexity. For example, re-indexing 58.1k solutions with an average size of 35.2k per document takes approximately 3 hours and 7 minutes, which highlights the importance of optimal configuration and hardware.
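As a back-of-the-envelope check using the figures above, 3 hours and 7 minutes is 11,220 seconds for 58,100 documents, or roughly 0.19 seconds per document. The sketch below simply extrapolates that measured rate to other corpus sizes; the rate is the only input taken from the example, and actual times depend on your configuration and hardware:

```python
# Measured rate from the example above: 11,220 s / 58,100 docs ≈ 0.19 s/doc.
SECS_PER_DOC = 11220 / 58100

def estimate_reindex_hours(doc_count: int, secs_per_doc: float = SECS_PER_DOC) -> float:
    """Estimate re-indexing time by extrapolating the measured per-document rate."""
    return doc_count * secs_per_doc / 3600

print(f"{estimate_reindex_hours(100_000):.1f} h")  # ~5.4 h at the same rate
```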
- Automatic Vectorization: When a solution is published in Solution Manager, it is automatically vectorized. Re-indexing the knowledge base is required only for the initial deployment of Neural/Hybrid Search or if any maintenance is required.
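A hypothetical sketch of what such a publish-time hook might look like, reusing the index_solution helper from the embedding sketch above; the hook name and solution fields are assumptions, not the product's actual API:

```python
def on_solution_published(solution: dict) -> None:
    """Hypothetical publish hook: vectorize and index a single solution
    immediately, so no full re-index is needed for routine publishing."""
    index_solution(solution["id"], solution["title"], solution["body"])
```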
Getting Started
To start using Neural or Hybrid search, please contact your Customer Success Manager (CSM).