Admins can add and manage custom LLM and embedding endpoints within QAnswer. This includes defining provider type, modality (text, image), sensitivity level, and connection parameters (endpoint URL, API key). This feature is restricted to organization admins to ensure secure configuration and governance.
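The connection parameters described above can be pictured as a small structured record. The following is a minimal sketch, assuming a dataclass-style shape; the field names, provider values, and `validate` helper are illustrative only and do not reflect QAnswer's actual admin API.

```python
from dataclasses import dataclass

# Hypothetical shape of a custom endpoint registration.
# All names here are illustrative, not QAnswer's real schema.
@dataclass
class CustomEndpoint:
    provider: str     # e.g. "openai-compatible" (assumed value)
    modality: str     # "text" or "image"
    sensitivity: str  # e.g. "internal" (assumed label)
    url: str          # endpoint URL
    api_key: str      # stored server-side, never exposed to end users

    def validate(self) -> None:
        # Basic governance checks an admin UI might enforce.
        if self.modality not in ("text", "image"):
            raise ValueError(f"unsupported modality: {self.modality}")
        if not self.url.startswith("https://"):
            raise ValueError("endpoint URL must use HTTPS")

endpoint = CustomEndpoint(
    provider="openai-compatible",
    modality="text",
    sensitivity="internal",
    url="https://llm.example.org/v1",
    api_key="sk-...",
)
endpoint.validate()  # raises ValueError if misconfigured
```

Restricting creation of such records to organization admins keeps API keys and sensitivity labels under central control.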
QAnswer now integrates with Confluence to enable AI-driven retrieval across pages, spaces, and project documentation. The assistant processes Confluence content at scale, applies semantic search to identify the most relevant fragments, and delivers contextual answers directly in chat. With support for multilingual queries and heterogeneous datasets, this integration reduces time spent navigating Confluence manually and ensures consistent, transparent access to organizational knowledge.
General Chat can now automatically route questions to the right assistant. Each query is checked against all available assistants, and if one is a good match, the system forwards the query to it. The interface clearly shows which assistant is being used, so users can see how their request is handled. This makes it possible to work with multiple assistants in one place without manual switching.
We’ve made a series of refinements across QAnswer to improve stability, usability, and overall performance. From smarter chat interactions to enhanced search, new connectors, and UI polish, these updates make everyday use faster and smoother. Click below to explore the full list of improvements.
View All Improvements
Smarter Chat Experience
- Agentic Messages: The chatbot now explains the actions it takes, so you’re always in the loop.
- Facets in Chat: Easily filter and refine answers directly within the chatbot.

New Integrations & Connectors
- Docling-as-a-Service: Seamless access to an external Docling service.
- SharePoint Search by Item Types: Find the right kind of content faster.

User Experience & UI Improvements
- Quota Exceeded Redirect: Users are now guided to their page when limits are reached.
- Access-PDF Experience: New loader and better disabling for a smoother experience.
- Role-based Access: Paths are now restricted based on user roles.
- General CSS Fixes: Cleaner design, reduced scrolling issues, and a more polished look.

Better Search & Filtering
- Improved Facets: Auto-generated filters are now clearer and easier to use.
- Auto Date Filtering: Search results can now be narrowed down by date automatically.
- Facets on Search: Enhanced filters to help you quickly find what you need.
- Cited Sources with Dates & Icons: Sources now include refresh dates (optional) and helpful icons.

Customisation & Control
- Banner for LLM Types: Clear visibility into whether you’re using QAnswer’s own models (on-premise) or external ones.
- Clip on Assistants: Upload data directly in assistant task chats, synced with the global chatbot clip feature.
QAnswer now offers more granular source-level traceability for answers pulled from large documents. It highlights precise sentence fragments, anchors them with numbered references, and scrolls directly to their location in the file. Whether working with contracts, specs, policy docs, or multi-section reports, this update reduces the overhead of manual verification and improves transparency across complex datasets.
Claude 3.7 Sonnet is now available in QAnswer, fully hosted within the EU. Known for its balanced performance on reasoning, summarization, and structured outputs, it’s a solid option for enterprise use cases. Teams can now select Claude alongside other available models like QAnswer LLM, GPT-4o Mini, or Mistral Medium—choosing the one that best fits their technical and compliance requirements.
Prompt transparency is now available in QAnswer. For every generated response, users can inspect the full prompt structure: system instructions, retrieved documents, and the final output. This feature allows teams to verify how a response was constructed. Legal departments can review the full trace for compliance, AI engineers can debug assistant behavior more precisely, and analysts can check for consistency in responses. The full input-output chain is now visible, making the system auditable and easier to validate in regulated or high-responsibility environments.
QAnswer now supports faceted search. Users can combine natural language queries with structured filters such as file type, author, or date range to refine results. Filters are inferred automatically from the indexed content using both metadata and LLM-based analysis. They adjust dynamically based on the structure and content of the dataset. This enables more targeted queries, improves control over large result sets, and supports use cases that require both flexibility and precision.
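The combination of a free-text query with structured filters can be sketched as follows. This is a minimal illustration under assumed metadata fields (`author`, `type`, `date`); the actual facet inference in QAnswer uses metadata plus LLM-based analysis, which is not reproduced here.

```python
from datetime import date

# Toy document index with illustrative metadata facets.
DOCS = [
    {"title": "Q3 budget report", "author": "alice", "type": "pdf",
     "date": date(2024, 9, 1)},
    {"title": "Q3 budget slides", "author": "bob", "type": "pptx",
     "date": date(2024, 9, 5)},
    {"title": "Hiring policy", "author": "alice", "type": "pdf",
     "date": date(2023, 1, 10)},
]

def search(query: str, **facets):
    """Match every query term against titles, then apply structured filters."""
    terms = query.lower().split()
    results = [d for d in DOCS if all(t in d["title"].lower() for t in terms)]
    for field, value in facets.items():
        if field == "date_from":  # range filter on the date facet
            results = [d for d in results if d["date"] >= value]
        else:                     # exact-match filter (author, type, ...)
            results = [d for d in results if d[field] == value]
    return results

hits = search("budget", type="pdf", date_from=date(2024, 1, 1))
print([d["title"] for d in hits])  # → ['Q3 budget report']
```

Applying filters after the text match is what narrows a large result set down to exactly the documents that satisfy both the query and the facets.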
QAnswer now supports human takeover, enabling a seamless switch from AI responses to live human agents within the same chat interface. When triggered, the assigned staff member receives an email notification with a direct link to join the session and respond manually. This is ideal for workflows requiring human validation, escalation, or oversight, especially in sensitive or regulated contexts. The feature integrates natively and supports hybrid support models out of the box.
Smarter Report Generation: Templates, Structure, and Reusability
v2.0.0
Report generation has been a core use case for QAnswer, particularly in workflows where summaries or structured documents follow predefined formats. This release introduces three improvements to streamline that process. Users can now create reusable templates to standardize output across recurring tasks. External formats—such as Word, Markdown, or HTML—can be imported and applied directly. The system also enforces better control over structure and formatting, ensuring that generated content aligns more closely with the expected layout and reduces the need for manual adjustments.
Smarter Acronym Handling: Retrieval and Prompt Awareness
v2.0.0
Acronym definitions (e.g., CPNP: Cosmetic Products Notification Portal) were previously only used during retrieval, helping the system find relevant sources but not improving how the LLM interpreted the query itself. With this release, acronym data is now also injected into the LLM prompt. This allows the model to understand and respond more accurately to queries that reference acronyms—even when the full term isn’t present in the user’s input. No changes are required in how acronym data is added; the enhancement improves coverage and consistency across both search and generation.
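The prompt-injection step described above can be sketched as a glossary lookup before prompt assembly. This is a hedged illustration: the `ACRONYMS` table reuses the CPNP example from the text, but the prompt template and function names are assumptions, not QAnswer's actual implementation.

```python
# Illustrative acronym glossary; CPNP comes from the release note itself.
ACRONYMS = {
    "CPNP": "Cosmetic Products Notification Portal",
    "GDPR": "General Data Protection Regulation",
}

def build_prompt(question: str, context: str) -> str:
    """Assemble an LLM prompt, prepending definitions of acronyms
    that appear in the user's question."""
    found = [a for a in ACRONYMS if a in question]
    parts = []
    if found:
        glossary = "\n".join(f"{a}: {ACRONYMS[a]}" for a in found)
        parts.append("Acronym definitions:\n" + glossary)
    parts.append("Context:\n" + context)
    parts.append("Question: " + question)
    return "\n\n".join(parts)

prompt = build_prompt("What is submitted via CPNP?", "…retrieved passages…")
print("Cosmetic Products Notification Portal" in prompt)  # → True
```

Because the expansion sits in the prompt itself, the model can answer correctly even when the full term never appears in the user's input or the retrieved passages.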
Sri Kalidindi
AI Engineer
Try QAnswer for free!
Discover how our AI can revolutionize the management of your data