Examples

Example Workflows to Showcase Evaluation Power

This section presents a series of examples that show how to build workflows designed to highlight Dynamiq's evaluation capabilities. Each workflow serves as a practical demonstration of how to apply evaluation metrics effectively in a different scenario.

Upcoming Examples

The following subpages will feature detailed examples for creating various types of workflows, including:

  1. Accurate Workflow:

    • Learn how to set up a workflow that consistently generates accurate answers, giving you a baseline on which your evaluation metrics should score highly.

  2. Inaccurate Workflow:

    • Explore the creation of a workflow that produces intentionally incorrect answers. This workflow will help demonstrate how well your evaluation metrics can identify inaccuracies.

  3. RAG Workflow:

    • Gain insights into building a Retrieval-Augmented Generation (RAG) workflow, showcasing the integration of retrieval and generation processes along with their evaluation.

  4. Benchmarking LLMs:

    • Understand how to create workflows for benchmarking different Large Language Models (LLMs), providing a framework for comparing their performance across various tasks.

Each of these examples will provide step-by-step instructions so you can replicate the workflows and adapt them to your specific needs. Working through these guides will give you a deeper understanding of how to use Dynamiq's evaluation framework to improve your AI applications.
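
To make the idea concrete, here is a minimal, self-contained Python sketch of the accurate-versus-inaccurate comparison these examples build toward. The stub workflows and the token-overlap scorer below are illustrative stand-ins, not the Dynamiq SDK or its Answer Correctness metric; on the platform you would run real workflows and score them with the built-in metrics instead.

```python
# Illustrative sketch only -- stand-ins for Dynamiq workflows and metrics,
# not the Dynamiq SDK. It shows why pairing an "accurate" workflow with a
# deliberately "inaccurate" one is useful for sanity-checking a metric.

# Tiny evaluation dataset: question plus ground-truth answer.
DATASET = [
    {"question": "What is the capital of France?", "answer": "Paris"},
    {"question": "How many legs does a spider have?", "answer": "Eight legs"},
]

def accurate_workflow(question: str) -> str:
    """Stand-in for a workflow that consistently answers correctly."""
    return {row["question"]: row["answer"] for row in DATASET}[question]

def inaccurate_workflow(question: str) -> str:
    """Stand-in for a workflow that answers incorrectly on purpose."""
    return "I am not sure, possibly forty-two."

def token_f1(prediction: str, reference: str) -> float:
    """Toy token-overlap F1, a stand-in for an answer-correctness metric."""
    pred, ref = prediction.lower().split(), reference.lower().split()
    common = len(set(pred) & set(ref))
    if common == 0:
        return 0.0
    precision, recall = common / len(pred), common / len(ref)
    return 2 * precision * recall / (precision + recall)

def evaluate(workflow, name: str) -> None:
    """Run the workflow over the dataset and report its mean metric score."""
    scores = [token_f1(workflow(row["question"]), row["answer"]) for row in DATASET]
    print(f"{name}: mean score = {sum(scores) / len(scores):.2f}")

if __name__ == "__main__":
    evaluate(accurate_workflow, "accurate workflow")      # expected: high score
    evaluate(inaccurate_workflow, "inaccurate workflow")  # expected: near zero
```

A useful metric should separate the two cleanly: near-perfect scores for the accurate workflow and near-zero scores for the inaccurate one. The subpages apply the same idea to real workflows using Dynamiq's built-in metrics.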
