Configure the conversational bot settings
To configure your conversational bot, do the following:
1. In the SmartHub administration portal, click Conversational Search > Conversational Bot Settings.
2. On the Conversational Bot Settings page, complete the following fields:
   - Conversational Bot Name: Enter a name for your conversational bot.
   - Conversational Bot Search Engines: Select the search engines that you want to retrieve data from. No search engines are selected by default.
   - Metadata Properties: Enter a comma-separated list of chunk properties from which the AI will generate a response. Default: ESC_ChunkBody,clickUri,title
   - Maximum number of chunks to use: Specify the maximum number of text fragments used to generate the answer. Default: 10
   - Question Rewrite Instructions: This setting specifies a set of guidelines or directives aimed at reformulating the user's questions based on the chat history. Default: "Given the following conversation and a follow up question, rephrase the follow up question to be a standalone question, in its original language."
   - Question Rewrite Template: This setting provides a structured guide used to assist in reformulating questions. Do not change the {QuestionRewriteInstructions}, {ChatHistory}, and {UserQuestion} placeholders. Default: "{QuestionRewriteInstructions}. Chat History: {ChatHistory}. Follow Up Input: {UserQuestion}"
   - Answer Generation Instructions: A set of guidelines or directives provided to help generate appropriate, accurate, and well-structured responses to questions. {SourceDocsURLs:} must be present in the directives for proof documents to be returned. Default: "You are a chatbot engaged in a conversation with a human. When presented with a question and a context composed of parts from multiple documents, provide a concise answer solely based on the provided context. If an answer cannot be derived from the context, respond with: 'Sorry, I don't have the information needed to answer your question'. When you can provide an answer, state it concisely at the beginning of your response without any prefix, and then include the list of the sources **directly** used to derive the answer. Exclude any sources that are irrelevant to the final answer. Return the sources as a list of their clickUri using this exact format: <SourceDocsURLs> Url1; Url2; Url3; etc. </SourceDocsURLs>"
   - Answer Generation Template: A structured guide used to assist in generating responses to questions. Do not change the {AnswerGenerationInstructions}, {TextFragments}, and {UserProcessedQuestion} placeholders. Default: "{AnswerGenerationInstructions}. Context: {TextFragments}. Question: {UserProcessedQuestion}"
   - Maximum number of proof documents to return: Specify the maximum number of proof documents on which the generated response was based. Default: 5
   - Original Document Properties: Enter a comma-separated list of original document properties returned for the proof documents.
     If you are using Smart Preview settings, you must ensure the following properties are requested from each document. The exact value of these properties may vary depending on your source system:
     - FileType
     - Size
     - Chunk_Body
     - ClickUri
     - Title
     Default: clickUri,title,Rank
   - Large Language Model Type: Select the desired Large Language Model (LLM) from the list.
     Currently, only the RESTful Large Language Model is available. Default: RESTful LLM
   - Follow Up Questions: This setting allows you to configure follow-up questions that users can select when interacting with the chatbot. When enabled, the Maximum number of follow up questions to return and Follow Up Questions Generation Template settings display so that you can specify your follow-up questions configuration.
     If this setting is enabled, users see follow-up questions to their most recent question when conversing with the chatbot. If a user selects one of the follow-up questions, it is entered into the conversation. This process continues for every subsequent question in the conversation with the chatbot. Default: Disabled
   - Maximum number of follow up questions to return: The maximum number of follow-up questions the bot returns. The value must be between 1 and 5; if it is outside this range, a warning displays. Default: 4
   - Follow Up Questions Generation Template: This setting specifies a prompt that instructs the AI to return the configured number of follow-up questions in a specific format.
     In your prompt, do not edit the following parts:
     - {NotAnswerable}
     - {MaximumFollowUpQuestionsToReturn}
     - Return format: Question1; Question2; Question3... .
     Default: "You are a conversation supervisor assisting a junior chatbot. Given a 'Context' and a 'Current Question', follow these rules strictly: 1. If the 'Current Question' is unrelated to the 'Context', return exactly '{NotAnswerable}' and nothing else. 2. If the 'Current Question' is related to the 'Context', generate **exactly** {MaximumFollowUpQuestionsToReturn} follow-up questions that maintain the conversation's context. 3. Each follow-up question **must be strictly less than 10 words long**. 4. The 'Current Question' must **not** be included in the output. 5. Return the questions as a **semi-colon separated string**, with no extra text before or after. ### **Inputs:** - **Context:** {Context} - **Current Question:** {CurrentQuestion} ### **Output Format Examples:** - If no follow-ups are possible: `{NotAnswerable}` - If generating follow-ups: `Question1; Question2; Question3` Ensure strict adherence to these rules. Do not add explanations, greetings, or extra formatting."
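To illustrate how the Question Rewrite Instructions and Question Rewrite Template settings combine at run time, the sketch below fills the template's placeholders with the default instructions, a chat history, and a user question. The substitution logic shown is an illustrative assumption for demonstration purposes, not SmartHub's actual implementation:

```python
# Illustrative sketch: how the Question Rewrite Template placeholders
# might be filled in. SmartHub performs this substitution internally;
# this code is only a demonstration of the template mechanics.
REWRITE_INSTRUCTIONS = (
    "Given the following conversation and a follow up question, rephrase "
    "the follow up question to be a standalone question, in its original language."
)
REWRITE_TEMPLATE = (
    "{QuestionRewriteInstructions}. "
    "Chat History: {ChatHistory}. "
    "Follow Up Input: {UserQuestion}"
)

def build_rewrite_prompt(chat_history: str, user_question: str) -> str:
    # str.replace (rather than str.format) avoids errors when the chat
    # history itself contains curly braces.
    return (REWRITE_TEMPLATE
            .replace("{QuestionRewriteInstructions}", REWRITE_INSTRUCTIONS)
            .replace("{ChatHistory}", chat_history)
            .replace("{UserQuestion}", user_question))

prompt = build_rewrite_prompt(
    "User: What is SmartHub? Bot: A search platform.",
    "How do I configure it?",
)
```

The fully substituted prompt string is what is ultimately placed into the {ContentPlaceholder} slot of the Question Rewrite Input Format JSON before the request is sent to the LLM.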
3. Provide values for the selected Large Language Model. The RESTful Large Language Model settings are described below. All default values are configured for Azure OpenAI.
   - Endpoint URL: Enter the endpoint URL of the chosen LLM. Default: https://{resource_name}.openai.azure.com/openai/deployments/{deployment_name}/chat/completions?api-version={openai_api_version}
   - Request Headers: Enter the required HTTP headers used to provide information about the request context. There are two types of request headers:
     - Standard header
     - Secure header (containing sensitive information such as an authorization token)
     Default: Request header name: api-key; Request header value: <your-api-key>
   - Prompt Characters Limit: Specify the maximum number of characters that the prompt can have. Default: 500000
   - Question Rewrite Input Format: Specify the JSON structure used by the selected LLM to organize the information needed to rewrite a question. Do not change the {ContentPlaceholder} placeholder. Default:
     {
       "temperature": 0,
       "messages": [
         {
           "role": "user",
           "content": "{ContentPlaceholder}"
         }
       ]
     }
   - Question Rewrite Key Path: Enter the full path to the key to ensure accurate extraction of the intended information from the JSON response. Default: choices[0].message.content
   - Answer Generation Input Format: Specify the JSON structure used by the selected LLM to organize the information needed to respond to a question. Do not change the {ContentPlaceholder} placeholder. Default:
     {
       "temperature": 0,
       "stream": true,
       "messages": [
         {
           "role": "user",
           "content": "{ContentPlaceholder}"
         }
       ]
     }
   - Answer Generation Key Path: Enter the full path to the key to ensure accurate extraction of the intended information from the JSON response. Default: data:|choices[0].delta.content|[DONE]
   - Number of LLM request retries: Specify the number of retries that are applied for the LLM request. When a request is retried, you will see the following in the SmartHub logs: "There was an issue during the LLM request, but this worked after {Number of LLM request retries} retries." Default: 3
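The non-streaming Question Rewrite Key Path (choices[0].message.content) walks the LLM's JSON response by dict key and list index. A minimal sketch of such a key-path resolver, using a hypothetical sample response body (the helper function and sample data are illustrative, not part of SmartHub):

```python
import re

def resolve_key_path(path: str, response: dict):
    """Walk a key path like 'choices[0].message.content' through a
    parsed JSON response, following dict keys and list indexes."""
    value = response
    for part in path.split("."):
        # Split 'choices[0]' into the key 'choices' and the index 0.
        match = re.fullmatch(r"(\w+)(?:\[(\d+)\])?", part)
        key, index = match.group(1), match.group(2)
        value = value[key]
        if index is not None:
            value = value[int(index)]
    return value

# Hypothetical chat-completions response body:
response = {"choices": [{"message": {"role": "assistant",
                                     "content": "What is SmartHub?"}}]}
answer = resolve_key_path("choices[0].message.content", response)
# answer == "What is SmartHub?"
```

The same resolver shape would apply to any key path expressed in this dot-and-index syntax, which is why the key-path settings above must match the response structure of your chosen LLM exactly.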
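The Answer Generation Key Path (data:|choices[0].delta.content|[DONE]) describes a streamed response: each frame is a line prefixed with `data:`, the token text sits at choices[0].delta.content inside each frame, and `[DONE]` marks the end of the stream. A sketch of how such a stream could be assembled, using hypothetical sample frames (the function name and sample data are assumptions for illustration):

```python
import json

def assemble_streamed_answer(lines):
    """Assemble an answer from SSE-style lines of the form
    'data: {json}', stopping at the '[DONE]' sentinel."""
    parts = []
    for line in lines:
        if not line.startswith("data:"):
            continue  # skip blank keep-alive lines
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":
            break  # end-of-stream sentinel
        chunk = json.loads(payload)
        delta = chunk["choices"][0]["delta"]
        if "content" in delta:  # role-only deltas carry no text
            parts.append(delta["content"])
    return "".join(parts)

stream = [
    'data: {"choices": [{"delta": {"content": "Smart"}}]}',
    'data: {"choices": [{"delta": {"content": "Hub"}}]}',
    "data: [DONE]",
]
# assemble_streamed_answer(stream) == "SmartHub"
```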
4. In the Smart Preview Settings section, specify settings for the document previews opened from your conversational search chats. These document previews automatically open the document page that is most relevant to the user's question, based on AI-powered content analysis. SmartHub also leverages AI to extract the most important entities from the user's question and the AI's response, and highlights those entities in the preview window. You can provide values for the following Smart Preview settings fields:
   If you are using document previews, you must be using SmartHub 7.1 and Smart Previews 4.0.x or later. Additionally, you must add the necessary fields under Original Document Properties.
   - Phrase match padding length: This setting defines how much text, preceding or succeeding the key phrase, is included when extracting a chunk. This ensures more meaningful context when navigating to the relevant section during in-document search actions. Default: 500
   - Smart Previews Prompt Template: This setting specifies the prompt template that is sent to the LLM. This template instructs the model on how to identify and extract the most relevant phrase from the provided content. Default: "Given a Context, a Question, and an Answer generated by an LLM, do the following actions: Step 1: Extract the single most relevant substring from the Context (not the Answer) used to generate the Answer. Preserve all characters exactly, and encode line breaks as escaped \n characters (do not use actual line breaks). Step 2: Extract exactly ten unique, verbatim, three-word phrases from the Context. Step 3: From the given question and its answer, extract the top 5 most meaningful and specific terms that best represent their core purpose. Avoid vague or overly broad terms unless they are part of a named concept or hold technical significance. Order by their importance!"
   - Smart Previews Input Format: This setting specifies the JSON structure used to organize the context before sending it to the LLM. You must not modify the {ContentPlaceholder} tag, as it is essential for dynamic content insertion. Default:
     {
"max_completion_tokens": 8000,
"messages": [
{
"content": "{ContextPlaceholder}",
"role": "user"
}
],
"model": "gpt-4.1",
"parallel_tool_calls": false,
"temperature": 0,
"tool_choice": "required",
"tools": [
{
"function": {
"description": "{ContentPlaceholder}",
"name": "Orchestrator-Extract-Most-Relevant-Information",
"parameters": {
"properties": {
"most_important_entities": {
            "description": "Top 5 most important entities extracted from the question and from answer in the row of their importance.",
"items": {
"type": "string"
},
"type": "array"
},
"three_word_extractions": {
"description": "Three-word phrases from the context",
"items": {
"type": "string"
},
"type": "array"
},
"top_relevant_phrase": {
            "description": "The most relevant phrase from context for the question and answer",
"type": "string"
}
},
"required": [
"top_relevant_phrase",
"three_word_extractions",
"most_important_entities"
],
"type": "object"
},
"strict": false
},
"type": "function"
}
]
     }
   - Smart Previews Key Path: This setting specifies the full path to the key in the JSON response from the LLM. This path is used to accurately extract the relevant information. Default: choices[0].message.tool_calls[0].function.arguments
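Because the Smart Previews response is returned through a tool call, the value at the key path choices[0].message.tool_calls[0].function.arguments is itself a JSON-encoded string that must be parsed a second time to obtain the three fields declared in the input format above. A sketch with a hypothetical response body (the sample values are assumptions for illustration):

```python
import json

# Hypothetical LLM response for the Smart Previews tool call. The
# "arguments" field is a JSON string, so it needs a second json.loads.
response = {
    "choices": [{
        "message": {
            "tool_calls": [{
                "function": {
                    "name": "Orchestrator-Extract-Most-Relevant-Information",
                    "arguments": json.dumps({
                        "top_relevant_phrase": "SmartHub is a search platform.",
                        "three_word_extractions": ["a search platform"],
                        "most_important_entities": ["SmartHub"],
                    }),
                }
            }]
        }
    }]
}

# Follow the key path, then decode the nested JSON string.
raw = response["choices"][0]["message"]["tool_calls"][0]["function"]["arguments"]
extraction = json.loads(raw)
# extraction["top_relevant_phrase"] == "SmartHub is a search platform."
```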
5. Click Save.