
Approach 2: Adaptive Orchestrator with Multiple Agents


For more complex reporting needs and use cases that require nuanced responses, Approach 2 leverages an adaptive orchestrator to coordinate multiple specialized agents. This setup uses two agents:

  1. Searcher Agent (configured in Approach 1): Handles initial information retrieval.

  2. SimpleAgent: Processes the retrieved information and crafts well-structured, coherent reports.

This multi-agent configuration allows for both precise information gathering and thoughtful report generation, making it ideal for use cases that demand high-quality responses and organized output.

Step 1: Configuring the Orchestrator and Agents

The first step involves setting up an orchestrator that will dynamically manage and delegate tasks to the two agents. Here’s a breakdown of their roles:

  • Searcher Agent: Focused solely on retrieving accurate information based on the user’s query.

  • SimpleAgent: Assigned the role of organizing and refining information into clear, cohesive reports.

An example prompt for the SimpleAgent could be:

You are an AI assistant for content refinement, focused on improving readability while retaining the original message, links, and citations. Your task is to enhance the clarity and flow of the text without altering its tone or core details.
Guidelines:
1. **Clarity and Readability**: Adjust sentence structure to improve flow and comprehension.
2. **Grammar and Style**: Correct any errors or awkward phrasing and maintain a consistent tone.
3. **Engaging Vocabulary**: Replace weak words with precise alternatives, keeping the language accessible.
4. **Preserve Links and Citations**: Do not alter hyperlinks or references.
5. **Maintain Journalistic Tone**: Retain all facts, dates, names, and statistics accurately, without adding subjective or casual wording.
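
The same configuration can also be expressed in code. Below is a minimal sketch using the Dynamiq Python SDK; the model name, the `ScaleSerpTool` standing in for the search tool built in Approach 1, and the abridged role prompts are illustrative assumptions rather than required values.

```python
from dynamiq.connections import OpenAI as OpenAIConnection, ScaleSerp
from dynamiq.nodes.agents.orchestrators.adaptive import AdaptiveOrchestrator
from dynamiq.nodes.agents.orchestrators.adaptive_manager import AdaptiveAgentManager
from dynamiq.nodes.agents.react import ReActAgent
from dynamiq.nodes.agents.simple import SimpleAgent
from dynamiq.nodes.llms import OpenAI
from dynamiq.nodes.tools.scale_serp import ScaleSerpTool

# One LLM shared by both agents and the manager (model choice is illustrative).
llm = OpenAI(
    connection=OpenAIConnection(),  # reads OPENAI_API_KEY from the environment
    model="gpt-4o",
    temperature=0.1,
)

# Searcher Agent: the ReAct agent from Approach 1. ScaleSerpTool stands in
# for whichever search tool you attached there.
searcher_agent = ReActAgent(
    name="Searcher Agent",
    llm=llm,
    tools=[ScaleSerpTool(connection=ScaleSerp())],
    role="Retrieve accurate, up-to-date information for the user's query, "
         "including links and citations.",
)

# SimpleAgent (the "Writer Agent"): refines retrieved information into a
# report. Its role is the content-refinement prompt shown above (abridged).
writer_agent = SimpleAgent(
    name="Writer Agent",
    llm=llm,
    role=(
        "You are an AI assistant for content refinement, focused on improving "
        "readability while retaining the original message, links, and citations."
    ),
)

# The adaptive orchestrator (the "Manager Agent") delegates between the two.
orchestrator = AdaptiveOrchestrator(
    name="adaptive-orchestrator",
    agents=[searcher_agent, writer_agent],
    manager=AdaptiveAgentManager(llm=llm),
)
```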

Step 2: Building / Testing the Orchestrator Workflow

After defining the agents, we'll move on to designing the orchestrator’s workflow. This workflow includes:

  1. Delegation: The orchestrator sends the user’s query to the Searcher Agent, which performs the search.

  2. Processing: The retrieved information is passed to the SimpleAgent, which organizes it into a structured report.

  3. Output: The final report is sent to the user.

This setup ensures that each agent can focus on its specialized role, resulting in a high-quality response that is both informative and well-presented.
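
With the agents and orchestrator defined as in the sketch above, the whole delegation-processing-output loop reduces to a single call. The query string is only an example, and the exact shape of the result object may vary by SDK version.

```python
# Delegate a user query; the manager routes it through search, then writing.
result = orchestrator.run(
    input_data={"input": "Summarize this week's developments in EU AI regulation."},
)

# The consolidated report produced by the Writer Agent.
print(result.output.get("content"))
```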

With the orchestrator and agents configured, testing becomes essential to verify that:

  1. Queries are processed correctly by the Searcher Agent

  2. The SimpleAgent refines the results into a structured report

This testing phase can be conducted within the orchestrator interface: enter test queries and monitor the orchestrator’s behavior to confirm that the agents collaborate seamlessly, then review the logs to catch any issues or miscommunications between them.
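
If you are driving the run from the SDK instead of the UI, the same logs can be captured with a tracing callback. This is a sketch under the assumptions above; `TracingCallbackHandler` and `RunnableConfig` come from the Dynamiq SDK, though the exact shape of the recorded runs may differ by version.

```python
import json

from dynamiq.callbacks import TracingCallbackHandler
from dynamiq.runnables import RunnableConfig
from dynamiq.utils import JsonWorkflowEncoder

# Record every step (manager decisions, agent calls, LLM calls) during the run.
tracing = TracingCallbackHandler()
result = orchestrator.run(
    input_data={"input": "Summarize this week's developments in EU AI regulation."},
    config=RunnableConfig(callbacks=[tracing]),
)

# Dump the collected trace; inspecting each run's input and output makes
# miscommunications between the Searcher Agent and the SimpleAgent easy to spot.
print(json.dumps(
    {"runs": [run.to_dict() for run in tracing.runs.values()]},
    cls=JsonWorkflowEncoder,
))
```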

As the results show, the entire process is managed by the orchestrator, also known as the “Manager Agent.” First, the orchestrator invokes the Searcher Agent to gather data. It then hands the retrieved material to the SimpleAgent (acting as the “Writer Agent”) to produce the final report. Finally, the orchestrator consolidates the response, delivering a clear, complete answer to the user.