Configure the conversational bot settings
To configure your conversational bot, do the following:
- In the SmartHub administration portal, click Conversational Search > Conversational Bot Settings.
- On the Conversational Bot Settings page, complete the following fields:
Setting: Conversational Bot Name
Description: Enter a name for your conversational bot.

Setting: Conversational Bot Search Engines
Description: Select the search engines that you want to retrieve data from.
Default value: No search engines are selected.

Setting: Metadata Properties
Description: Enter a comma-separated list of chunk properties from which the AI will generate a response.
Default value: ESC_ChunkBody,clickUri,title

Setting: Maximum number of chunks to use
Description: Specify the maximum number of text fragments used to generate the answer.
Default value: 10

Setting: Question Rewrite Instructions
Description: A set of guidelines or directives for reformulating the user's question based on the chat history.
Default value: Given the following conversation and a follow up question, rephrase the follow up question to be a standalone question, in its original language.
Setting: Question Rewrite Template
Description: A structured guide used to assist in reformulating questions. Do not change the {QuestionRewriteInstructions}, {ChatHistory} and {UserQuestion} placeholders.
Default value: {QuestionRewriteInstructions}. Chat History: {ChatHistory}. Follow Up Input: {UserQuestion}

Setting: Answer Generation Instructions
Description: A set of guidelines or directives provided to help generate appropriate, accurate, and well-structured responses to questions. {SourceDocsURLs:} must be present in the directives for proof documents to be returned.
Default value: You are a chatbot engaged in a conversation with a human. When presented with a question and a context composed of parts from multiple documents, provide a concise answer solely based on the provided context. If an answer cannot be derived from the context, respond with: 'Sorry, I don't have the information needed to answer your question'. When you can provide an answer, state it concisely at the beginning of your response without any prefix, and then include the list of the sources **directly** used to derive the answer. Exclude any sources that are irrelevant to the final answer. Return the sources as a list of their clickUri using this exact format: <SourceDocsURLs> Url1; Url2; Url3; etc. </SourceDocsURLs>

Setting: Answer Generation Template
Description: A structured guide used to assist in generating responses to questions. Do not change the {AnswerGenerationInstructions}, {TextFragments} and {UserProcessedQuestion} placeholders.
Default value: {AnswerGenerationInstructions}. Context: {TextFragments}. Question: {UserProcessedQuestion}

Setting: Maximum number of proof documents to return
Description: Specify the maximum number of proof documents on which the generated response was based.
Default value: 5

Setting: Original Document Properties
Description: Enter a comma-separated list of original document properties returned for the proof documents.
Default value: clickUri,title,Rank

Setting: Large Language Model Type
Description: Select the desired Large Language Model (LLM) from the list. Currently, only the RESTful Large Language Model is available.
Default value: RESTful LLM

Setting: Follow Up Questions
Description: Configure follow-up questions that users can select when interacting with the chatbot. When enabled, the Maximum number of follow up questions to return and Follow Up Questions Generation Template settings are displayed so that you can specify your follow-up questions configuration. If this setting is enabled, users see follow-up questions to their most recent question when conversing with the chatbot. If a user selects one of the follow-up questions, it is entered into the conversation. This process continues for every subsequent question in the conversation with the chatbot.
Default value: Disabled

Setting: Maximum number of follow up questions to return
Description: The total number of follow-up questions you expect the bot to return. The value must be between 1 and 5; if the value is outside this range, a warning is displayed.
Default value: 4

Setting: Follow Up Questions Generation Template
Description: A prompt that instructs the AI to generate the configured number of follow-up questions in a specific format.
In your prompt, you should not edit the following parts:
- {NotAnswerable}
- {MaximumFollowUpQuestionsToReturn}
- Return format: Question1; Question2; Question3... .
Default value: You are a conversation supervisor assisting a junior chatbot. Given a 'Context' and a 'Current Question', follow these rules strictly: 1. If the 'Current Question' is unrelated to the 'Context', return exactly '{NotAnswerable}' and nothing else. 2. If the 'Current Question' is related to the 'Context', generate **exactly** {MaximumFollowUpQuestionsToReturn} follow-up questions that maintain the conversation's context. 3. Each follow-up question **must be strictly less than 10 words long**. 4. The 'Current Question' must **not** be included in the output. 5. Return the questions as a **semi-colon separated string**, with no extra text before or after. ### **Inputs:** - **Context:** {Context} - **Current Question:** {CurrentQuestion} ### **Output Format Examples:** - If no follow-ups are possible: `{NotAnswerable}` - If generating follow-ups: `Question1; Question2; Question3` Ensure strict adherence to these rules. Do not add explanations, greetings, or extra formatting.
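To illustrate how the template settings above fit together (a minimal sketch only, not SmartHub's actual implementation — the helper function and sample values are hypothetical), the placeholders in a template such as Question Rewrite Template are replaced with runtime values before the prompt is sent to the LLM:

```python
# Illustrative sketch: filling the Question Rewrite Template placeholders.
# The instructions and template strings below are the documented defaults;
# build_rewrite_prompt is a hypothetical helper, not part of SmartHub.

QUESTION_REWRITE_INSTRUCTIONS = (
    "Given the following conversation and a follow up question, rephrase "
    "the follow up question to be a standalone question, in its original language."
)

QUESTION_REWRITE_TEMPLATE = (
    "{QuestionRewriteInstructions}. "
    "Chat History: {ChatHistory}. "
    "Follow Up Input: {UserQuestion}"
)

def build_rewrite_prompt(chat_history: str, user_question: str) -> str:
    """Substitute each documented placeholder with its runtime value."""
    return (QUESTION_REWRITE_TEMPLATE
            .replace("{QuestionRewriteInstructions}", QUESTION_REWRITE_INSTRUCTIONS)
            .replace("{ChatHistory}", chat_history)
            .replace("{UserQuestion}", user_question))

prompt = build_rewrite_prompt(
    chat_history="User: What is SmartHub? Bot: A search platform.",
    user_question="How do I configure it?",
)
print(prompt)
```

This is why the documentation warns against editing the placeholder names: the substitution is performed on those exact tokens, and a renamed placeholder would survive into the prompt unreplaced.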
- Provide values for the selected Large Language Model. RESTful Large Language Model settings are described in the table below:
All default values are configured for Azure OpenAI.

Setting: Endpoint URL
Description: Enter the endpoint URL of the chosen LLM.
Default value: https://{resource_name}.openai.azure.com/openai/deployments/{deployment_name}/chat/completions?api-version={openai_api_version}

Setting: Request Headers
Description: Enter the required HTTP headers used to provide information about the request context. There are two types of request headers:
- Standard header
- Secure header (containing sensitive information such as an authorization token)
Default value: Request header name: api-key; Request header value: <your-api-key>

Setting: Prompt Characters Limit
Description: Specify the maximum number of characters that the prompt can have.
Default value: 500000

Setting: Question Rewrite Input Format
Description: Specify the JSON structure used by the selected LLM to organize the information needed to rewrite a question. Do not change the {ContentPlaceholder} placeholder.
Default value:
{
"temperature": 0,
"messages": [
{
"role": "user",
"content": "{ContentPlaceholder}"
}
]
}

Setting: Question Rewrite Key Path
Description: Enter the full path to the key to ensure accurate extraction of the intended information from the JSON response.
Default value: choices[0].message.content

Setting: Answer Generation Input Format
Description: Specify the JSON structure used by the selected LLM to organize the information needed to respond to a question. Do not change the {ContentPlaceholder} placeholder.
Default value:
{
"temperature": 0,
"stream": true,
"messages": [
{
"role": "user",
"content": "{ContentPlaceholder}"
}
]
}

Setting: Answer Generation Key Path
Description: Enter the full path to the key to ensure accurate extraction of the intended information from the JSON response.
Default value: data:|choices[0].delta.content|[DONE]

Setting: Number of LLM request retries
Description: Specify the number of retries that are applied to the LLM request.
When a request is retried, you will see the following in the SmartHub logs:
"There was an issue during the LLM request, but this worked after {Number of LLM request retries} retries."
Default value: 3
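Under the default Azure OpenAI-style settings above, the request body and the streamed reply can be sketched as follows. This is a minimal illustration under stated assumptions: the endpoint URL and API key are the documented placeholders, and both helper functions are hypothetical, not SmartHub's actual code. The parsing follows the default Answer Generation Key Path, data:|choices[0].delta.content|[DONE], which describes a server-sent-events stream where each line is prefixed with "data:", each answer token lives at choices[0].delta.content, and "[DONE]" terminates the stream.

```python
# Illustrative sketch: calling a RESTful LLM endpoint and decoding its
# streamed reply per the default key path "data:|choices[0].delta.content|[DONE]".
# ENDPOINT, HEADERS, and both helpers are placeholders, not SmartHub internals.
import json

ENDPOINT = ("https://{resource_name}.openai.azure.com/openai/deployments/"
            "{deployment_name}/chat/completions?api-version={openai_api_version}")
HEADERS = {"api-key": "<your-api-key>", "Content-Type": "application/json"}

def build_request_body(prompt: str) -> str:
    """Build the Answer Generation Input Format, with {ContentPlaceholder}
    replaced by the actual prompt text."""
    body = {
        "temperature": 0,
        "stream": True,
        "messages": [{"role": "user", "content": prompt}],
    }
    return json.dumps(body)

def parse_streamed_answer(lines):
    """Decode server-sent events: skip non-'data:' lines, stop at '[DONE]',
    and concatenate the tokens found at choices[0].delta.content."""
    answer = []
    for line in lines:
        if not line.startswith("data:"):
            continue
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":
            break
        chunk = json.loads(payload)
        delta = chunk["choices"][0]["delta"]
        if "content" in delta:
            answer.append(delta["content"])
    return "".join(answer)

# Example with two streamed chunks followed by the terminator:
sample = [
    'data: {"choices": [{"delta": {"content": "Hello"}}]}',
    'data: {"choices": [{"delta": {"content": " world"}}]}',
    "data: [DONE]",
]
print(parse_streamed_answer(sample))  # → Hello world
```

The sketch shows why the key path has three segments: the first is the line prefix to match, the second is the JSON path to the token within each event, and the third is the sentinel that ends the stream.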
- Click Save.