Dynamiq Docs

Approach 1: Single Agent with a Defined Role


Last updated 6 months ago

In this first approach, we’ll focus on setting up a single, goal-oriented agent using a ReAct Agent. This approach is best suited for simple, focused use cases where a single agent can handle the entire search-and-response process.

Step 1: Selecting the Agent and Setting Up the Environment

We start by selecting a ReAct Agent, a popular choice for single-agent tasks due to its adaptability and efficiency. The setup involves:

  • Defining the agent’s role clearly.

  • Configuring the necessary tools for the agent, such as a search tool.

  • Testing and deploying the agent workflow.

Here’s an example configuration:

  • Role: To retrieve accurate information in response to a user’s query

  • Agent: ReAct Agent

  • Tool: SERP (Search Engine Results Page) tool for external search
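This configuration can be summarized as a small structure. The sketch below is purely illustrative — the field names and values are placeholders, not Dynamiq’s actual configuration schema:

```python
# Illustrative summary of the single-agent setup; the keys and values
# below are placeholders, not Dynamiq's real configuration schema.
agent_config = {
    "agent": "ReAct Agent",
    "role": "Retrieve accurate information in response to a user's query",
    "tools": ["SERP"],  # Search Engine Results Page tool for external search
}
```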

Step 2: Defining the Agent’s Role

Once the agent is selected, we set a clear, specific role that guides the agent's behavior. In this case, the agent’s role could be as simple as:

You are an expert AI assistant focused on refining user queries, conducting thorough searches, and delivering well-sourced answers. Your goal is to help users by clarifying their questions, performing detailed searches using available tools, and presenting accurate responses with proper citations and direct links, using markdown. Prioritize accuracy, clarity, and credibility in all answers. If information is conflicting or unclear, note this and suggest further research options.

Step 3: Testing the Workflow

Testing is an essential part of the setup process. After connecting the nodes, we must test the endpoint to verify that:

  1. The agent receives the query.

  2. The response is returned correctly.

Using a test interface allows us to monitor the logs and refine the workflow as necessary.
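The endpoint check above can be sketched in Python. The URL and payload schema below are assumptions — substitute the values shown on your workflow’s test page:

```python
import json
import urllib.request

# Hypothetical endpoint -- copy the real URL from your workflow's test page.
ENDPOINT = "https://your-dynamiq-host/workflows/<workflow-id>/run"

def build_payload(query: str) -> dict:
    # The exact input schema depends on your workflow's input transformer.
    return {"input": {"query": query}}

def run_query(query: str) -> dict:
    """Send a query to the test endpoint and return the JSON response."""
    req = urllib.request.Request(
        ENDPOINT,
        data=json.dumps(build_payload(query)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        # A non-2xx status raises here, and malformed JSON fails to parse,
        # so both checks above are exercised by a single call.
        return json.load(resp)
```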

To recap: we select a ReAct Agent, add a SERP tool for searching, and craft the agent’s role (for example, the prompt shown above). We then connect the inputs and outputs between the nodes and test the workflow by calling its endpoint to see how it behaves.

Step 4: Deploying the Workflow

After testing, we deploy the workflow, making it accessible via API or any compatible UI. This deployment allows integration with applications built in Python, Java, C, or any other language that supports HTTP requests.

We can continue testing in the UI to observe real-time responses and logs, ensuring smooth and reliable operation.
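For example, an application written in Python could call the deployed workflow over plain HTTP. The URL, access-key header, and input schema below are assumptions — adapt them to what your deployment page shows (access keys are managed under Platform Settings):

```python
import json
import urllib.request

# Placeholders -- use the real URL and an access key from Platform Settings.
DEPLOYMENT_URL = "https://your-dynamiq-host/deployments/<deployment-id>/run"
ACCESS_KEY = "<your-access-key>"

def build_request(query: str) -> urllib.request.Request:
    """Construct the HTTP request for the deployed workflow."""
    return urllib.request.Request(
        DEPLOYMENT_URL,
        data=json.dumps({"input": {"query": query}}).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            # Header name is an assumption; check your deployment's API details.
            "Authorization": f"Bearer {ACCESS_KEY}",
        },
    )

def ask(query: str) -> dict:
    """Call the deployed workflow and return its JSON response."""
    with urllib.request.urlopen(build_request(query), timeout=60) as resp:
        return json.load(resp)
```

Any other language with an HTTP client (Java, C, and so on) would send the same request: a POST with a JSON body and the access key in a header.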

Creating the searcher agent
Configuring the searcher agent