Chat


Last updated 5 months ago

The chat feature allows users to interact with deployed workflows through a conversational interface. This page outlines the steps for setting up and using the chat feature, including how to converse with workflows, orchestrators, and agents. The chat feature supports both streaming and full-response modes.

Steps to Use the Chat Feature

1. Create a Workflow

  • To use the chat feature, you must first create a workflow in the Workflows section of the platform.

  • Workflows can include combinations of LLMs, agents, orchestrators, or other workflows to define the logic and flow of operations.

  • Ensure that your workflow meets your desired conversational requirements.
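
Conceptually, a workflow chains nodes (LLMs, agents, orchestrators) so each step consumes the previous step's output. The sketch below is plain Python, not the Dynamiq SDK; `make_workflow`, `llm_node`, and `guardrail_node` are illustrative names.

```python
from typing import Callable, List

# Hypothetical sketch (not the Dynamiq SDK): a workflow is a chain of
# nodes, each transforming the output of the previous one.
Node = Callable[[str], str]

def make_workflow(nodes: List[Node]) -> Node:
    """Compose nodes left-to-right into a single callable workflow."""
    def run(user_input: str) -> str:
        result = user_input
        for node in nodes:
            result = node(result)
        return result
    return run

# Illustrative stand-ins for real nodes (an LLM call, a post-processor).
def llm_node(prompt: str) -> str:
    return f"LLM answer to: {prompt}"

def guardrail_node(text: str) -> str:
    return text.strip()

chat_workflow = make_workflow([llm_node, guardrail_node])
print(chat_workflow("What is RAG?"))  # → LLM answer to: What is RAG?
```

In the platform itself, this composition is done visually in the Low-Code Builder rather than in code.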

2. Deploy the Workflow

  • After creating a workflow, deploy it to make it accessible from the chat interface.

  • You can manage and update workflows post-deployment to enhance functionality.
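
A deployed workflow is typically reachable over HTTP. The endpoint URL and payload shape below are assumptions for illustration only, not the documented Dynamiq API; the sketch only builds the request body without sending it.

```python
import json

# Hypothetical endpoint for a deployed workflow; the URL and payload
# shape are illustrative assumptions, not the documented API.
ENDPOINT = "https://api.example.com/v1/workflows/<workflow-id>/run"

payload = {
    "input": {"message": "Hello, workflow!"},
    "stream": False,  # full-response mode; True would request streaming
}

body = json.dumps(payload)
# An HTTP client would POST `body` to ENDPOINT with an access key header,
# e.g. requests.post(ENDPOINT, data=body, headers={"Authorization": "..."})
print(body)
```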

3. Access the Chat Interface

  • Navigate to the Chat option on the platform, found under the creator drop-down menu in the top-right corner.

  • In the chat interface, select the deployed workflow you wish to interact with from the dropdown menu.

  • The workflows available in the dropdown will correspond to the workflows you have created and deployed.


Chat Modes

Streaming Mode

  • When enabled, streaming mode allows responses to be delivered progressively in real time.

  • This mode is particularly useful for lengthy responses or when a fast interaction is required.

Full-Response Mode

  • Full-response mode sends the complete response at once after processing is finished.

  • This mode is useful for responses where accuracy and completeness are prioritized over speed.

You can toggle between these modes based on the requirements of your conversation.
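
The difference between the two modes is when output reaches the user: progressively as it is produced, or all at once after completion. A minimal Python sketch, with `generate_tokens` standing in for an LLM producing tokens:

```python
from typing import Iterator

def generate_tokens(answer: str) -> Iterator[str]:
    """Stand-in for an LLM emitting a response one token at a time."""
    for token in answer.split():
        yield token + " "

answer = "Streaming delivers the reply token by token."

# Streaming mode: render each chunk as soon as it arrives.
for chunk in generate_tokens(answer):
    print(chunk, end="", flush=True)
print()

# Full-response mode: accumulate everything, then show it once.
full = "".join(generate_tokens(answer))
print(full.strip())
```

Both modes produce the same final text; streaming simply surfaces partial output earlier, which is why it feels faster for long replies.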

Features of the Chat Interface

Workflow Integration

  • Use the chat interface to test, debug, or interact with your workflows.

  • The chat supports a diverse range of workflows, including those that involve:

    • LLMs for generating content or answering queries.

    • Agents for decision-making or task execution.

    • Orchestrators to coordinate multiple workflows or actions.

Search and Selection

The interface allows you to search for specific workflows from a dropdown list, making it easy to locate and use a desired workflow.

Custom Interaction

The chat feature supports workflows tailored for specific use cases, such as data analysis, customer service, or content generation.


Best Practices

Test Your Workflow

  • Before using a workflow in the chat feature, thoroughly test it to ensure it behaves as expected.

  • Debug any issues during the creation phase.

Choose the Right Mode

  • For dynamic, real-time conversations, use Streaming Mode.

  • For structured and detailed interactions, opt for Full-Response Mode.

Iterate and Improve

  • Use the feedback from chat interactions to refine your workflows for better results.

The chat feature provides a flexible and interactive way to engage with your workflows, making it ideal for a wide range of use cases. By creating, deploying, and fine-tuning workflows, you can unlock the full potential of this functionality 👩‍🚀

Open the chat and select your workflow from the dropdown
Test your simple agent using the chat interface