Tracing Workflow Execution


Tracing is a powerful feature in Dynamiq that allows users to inspect each step of a workflow’s execution. This feature is invaluable for debugging, optimizing, and understanding workflow performance. By viewing the Run Tree, users can observe how data flows through each node, inspect inputs and outputs at each stage, and gain insights through metadata such as token usage and execution time.

After running a test or observing an actual workflow execution, you can view the detailed trace by navigating to the Logs tab in your deployment’s details. Select a specific trace to open the Run Tree view, where you’ll see a visual representation of the workflow's nodes and data flow.

The Run Tree presents a visual, step-by-step breakdown of the workflow execution. There are two main views available:

  • Graph View: Displays a visual flowchart of the workflow nodes, showing how each node processes input data and passes it to the next.

  • Tree View: Organizes the nodes in a hierarchical, text-based format, useful for seeing the order and dependencies between nodes.

In Graph View, each node represents a step in the workflow, such as an input processing stage, a model call, or an output generation. Clicking a node reveals the data it processed (a sketch of such an entry follows this list):

  • Input: The data that entered this node.

  • Output: The result produced by this node, which is then passed to the next step.

  • Metadata: Additional details the trace records for each node, such as:

    • Execution Time: Duration in seconds for the node to process the input.

    • Token Usage: Shows the total number of tokens used, including prompt tokens and completion tokens, as well as the associated costs.

    • Status: Indicates whether the node completed successfully or encountered any issues.
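To make this concrete, here is a hypothetical sketch of what a single node's trace entry might contain. The field names below are illustrative only, not Dynamiq's exact schema:

```python
# Hypothetical shape of one node's trace entry (illustrative field names,
# not Dynamiq's exact schema).
node_trace = {
    "node": "OpenAI",
    "status": "success",        # whether the node completed or failed
    "execution_time": 1.42,     # seconds spent processing the input
    "input": {"prompt": "Summarize the following text: ..."},
    "output": {"content": "Here is a short summary: ..."},
    "usage": {
        "prompt_tokens": 512,
        "completion_tokens": 128,
        "total_tokens": 640,
        "cost": 0.00077,        # estimated cost in USD
    },
}
```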

For example, if you select an OpenAI node, you’ll see:

  • Total Tokens: Total number of tokens processed.

  • Prompt Tokens: Tokens from the input prompt.

  • Completion Tokens: Tokens generated by the model in the response.

  • Cost: Estimated cost based on token usage (see the sketch after this list).
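Actual pricing depends on the model and provider, but the arithmetic behind the estimate is straightforward. A minimal sketch, assuming illustrative per-token rates rather than real prices:

```python
# Sketch of a token-based cost estimate. The rates below are assumed
# placeholders, not real provider pricing.
PROMPT_PRICE_PER_1K = 0.0010      # assumed USD per 1,000 prompt tokens
COMPLETION_PRICE_PER_1K = 0.0020  # assumed USD per 1,000 completion tokens

def estimate_cost(prompt_tokens: int, completion_tokens: int) -> float:
    """Estimate the cost of one LLM call from its token counts."""
    return (prompt_tokens / 1000 * PROMPT_PRICE_PER_1K
            + completion_tokens / 1000 * COMPLETION_PRICE_PER_1K)

print(f"${estimate_cost(512, 128):.5f}")  # 512 + 128 tokens -> $0.00077
```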

These insights are especially valuable for optimizing workflows, as they allow you to identify bottlenecks, high-cost stages, and areas where processing time could be improved.

Evaluating Workflow Logic and Debugging

With the tracing feature, users can systematically inspect each node to verify that the workflow logic is functioning as expected:

  • Node-by-Node Verification: Tracing lets you see exactly how each node transforms the data, making it easier to spot errors or unexpected results.

  • Error Diagnosis: If a node fails or produces an incorrect output, you can use the trace to backtrack and identify where the problem occurred. This could be due to incorrect input, misconfigured logic, or an unexpected response from an external API.

  • Optimizing Token Usage: Reviewing token usage and costs at each step enables you to identify nodes where prompts could be optimized, helping reduce costs for workflows involving language models (as sketched below).
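If you export or copy trace data, this kind of analysis is easy to script. A minimal sketch, assuming trace entries shaped like the hypothetical example earlier on this page:

```python
# Sketch: rank node trace entries by cost and execution time to surface
# bottlenecks. Field names are assumed, matching the earlier example.
def summarize_trace(entries: list[dict]) -> None:
    most_expensive = max(entries, key=lambda e: e["usage"]["cost"])
    slowest = max(entries, key=lambda e: e["execution_time"])
    print(f"Most expensive node: {most_expensive['node']} "
          f"(${most_expensive['usage']['cost']:.5f})")
    print(f"Slowest node: {slowest['node']} ({slowest['execution_time']:.2f}s)")

summarize_trace([
    {"node": "Input", "execution_time": 0.01, "usage": {"cost": 0.0}},
    {"node": "OpenAI", "execution_time": 1.42, "usage": {"cost": 0.00077}},
    {"node": "Output", "execution_time": 0.02, "usage": {"cost": 0.0}},
])
```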