
Prompt customization

Biel.ai offers two modes for controlling how the chatbot responds: AI Assistant (pre-configured prompts) and Custom prompt (full control).

AI Assistant

Pre-configured prompts optimized for technical documentation. Select a version:

  • v3 (Recommended), released Feb 2026: Friendly and conversational. Gives shorter, code-forward answers, hedges when unsure, and asks clarifying questions instead of guessing.
  • v2, released Aug 2025: Strict and reference-style. Refuses to answer unless the documentation clearly covers the question.
  • v1, released Oct 2024: Flexible. Extrapolates from related content and attempts answers even with limited context. Higher hallucination risk.

Which version should I pick?

  • Use v3 for most projects. It produces shorter answers, handles partial context gracefully, and is friendlier with greetings and follow-ups.
  • Use v2 if your audience prefers terse, documentation-style replies, or if you need the chatbot to refuse anything not explicitly covered in your docs.
  • Use v1 only for backwards compatibility with existing projects.

You can extend any version with additional instructions, such as adding a signature to responses or adjusting the tone.
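As an illustration, an extension for a v3 assistant might look like the lines below. The product name "Acme" and the specific rules are placeholders, not built-in settings; any plain-language instruction works.

```text
End every response with the signature "The Acme Docs Team".
Use an encouraging tone and keep answers under three paragraphs.
```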

Custom prompt

Write your own prompt for full control over the chatbot's behavior.

warning

Custom prompts may disable built-in safeguards and advanced processing features.

Configure the prompt

important

Only Administrator or Maintainer roles can manage projects. See Manage roles.

  1. In the Biel.ai dashboard, select your project.
  2. Click Settings.
  3. Under LLM settings > Define prompt:
    • Select an AI Assistant version and optionally add extensions, or
    • Select Custom prompt and write your own prompt.
  4. Click Save.

Examples

Extending an AI Assistant prompt:

Always start responses with "Hi!"

Custom prompt:

You are a customer support assistant for [Product Name]. Provide clear, accurate answers related to product features, troubleshooting, and usage. Keep responses professional and focused on the product. Only respond based on the information you can extract from CONTEXT and the conversation HISTORY.
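A longer custom prompt can spell out refusal behavior and formatting expectations explicitly. The sketch below is one possible structure, not a required format; replace the bracketed placeholders and rules with your own policies:

```text
You are a customer support assistant for [Product Name].

Rules:
- Answer only from the information in CONTEXT and the conversation HISTORY.
- If the answer is not in CONTEXT, say you don't know and suggest contacting support.
- Keep answers concise; use code blocks for commands and configuration snippets.
- Never invent product features, flags, or version numbers.
```

Because a custom prompt replaces the built-in safeguards, explicit rules like these help keep responses grounded in your documentation.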