Tracing Workflow Execution
Tracing is a powerful feature in Dynamiq that allows users to inspect each step of a workflow’s execution. This feature is invaluable for debugging, optimizing, and understanding workflow performance. By viewing the Run Tree, users can observe how data flows through each node, inspect inputs and outputs at each stage, and gain insights through metadata such as token usage and execution time.
After running a test or observing an actual workflow execution, you can view the detailed trace by navigating to the Logs tab in your deployment’s details. Select a specific trace to open the Run Tree view, where you’ll see a visual representation of the workflow's nodes and data flow.
The Run Tree presents a visual, step-by-step breakdown of the workflow execution. There are two main views available:
Graph View: Displays a visual flowchart of the workflow nodes, showing how each node processes input data and passes it to the next.
Tree View: Organizes the nodes in a hierarchical, text-based format, useful for seeing the order and dependencies between nodes.
In Graph View, each node represents a step in the workflow, such as an input-processing stage, a model call, or an output-generation stage. You can click on any node to inspect the data it processed; selecting a node reveals:
Input: The data that entered this node.
Output: The result produced by this node, which is passed to the next step.
Metadata: Additional details recorded for each node, such as:
Execution Time: Duration in seconds for the node to process the input.
Token Usage: Shows the total number of tokens used, including prompt tokens and completion tokens, as well as the associated costs.
Status: Indicates whether the node completed successfully or encountered any issues.
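The Run Tree surfaces these fields directly in the UI. If you export traces for programmatic analysis, a per-node record might look like the following sketch (the `TraceNode` class and its field names are illustrative, not Dynamiq's actual schema):

```python
from dataclasses import dataclass
from typing import Any

@dataclass
class TraceNode:
    """Illustrative shape of one node's trace record (not the real Dynamiq schema)."""
    name: str                # e.g. "OpenAI"
    input: dict[str, Any]    # data that entered this node
    output: dict[str, Any]   # result passed on to the next step
    execution_time: float    # seconds spent processing the input
    prompt_tokens: int = 0
    completion_tokens: int = 0
    status: str = "success"  # "success" or "failed"

    @property
    def total_tokens(self) -> int:
        # Total tokens = prompt tokens + completion tokens
        return self.prompt_tokens + self.completion_tokens

node = TraceNode(
    name="OpenAI",
    input={"prompt": "Summarize the report"},
    output={"content": "The report covers..."},
    execution_time=1.42,
    prompt_tokens=120,
    completion_tokens=85,
)
print(node.status, node.total_tokens)
```

Modeling each node's trace this way makes the input/output/metadata breakdown above concrete: every field in the UI maps to one attribute of the record.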
For example, if you select an OpenAI node, you'll see:
Total Tokens: Total number of tokens processed.
Prompt Tokens: Tokens from the input prompt.
Completion Tokens: Tokens generated by the model in the response.
Cost: Estimated cost based on token usage.
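The cost figure above is derived from the token counts. As a rough sketch, you can reproduce it yourself from prompt and completion tokens; the per-token prices below are hypothetical placeholders, so substitute your provider's actual pricing:

```python
# Hypothetical per-1K-token prices -- check your model provider's pricing page.
PROMPT_PRICE_PER_1K = 0.0005
COMPLETION_PRICE_PER_1K = 0.0015

def estimate_cost(prompt_tokens: int, completion_tokens: int) -> float:
    """Estimate the cost of one model call from its token counts."""
    return (prompt_tokens / 1000) * PROMPT_PRICE_PER_1K \
         + (completion_tokens / 1000) * COMPLETION_PRICE_PER_1K

# Example: the token counts shown for an OpenAI node in the trace.
cost = estimate_cost(prompt_tokens=120, completion_tokens=85)
print(f"Estimated cost: ${cost:.6f}")
```

Because prompt and completion tokens are usually priced differently, the trace reports them separately rather than only as a total.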
These insights are especially valuable for optimizing workflows, as they allow you to identify bottlenecks, high-cost stages, and areas where processing time could be improved.
With the tracing feature, users can systematically inspect each node to verify that the workflow logic is functioning as expected:
Node-by-Node Verification: Tracing lets you see exactly how each node transforms the data, making it easier to spot errors or unexpected results.
Error Diagnosis: If a node fails or produces an incorrect output, you can use the trace to backtrack and identify where the problem occurred. This could be due to incorrect input, misconfigured logic, or an unexpected response from an external API.
Optimizing Token Usage: Reviewing token usage and costs at each step enables you to identify nodes where prompts could be optimized, helping reduce costs for workflows involving language models.
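If you pull the trace records out for analysis, finding the bottleneck or the most expensive node is a simple scan. The list of dicts below is a hypothetical trace export, not Dynamiq's actual output format:

```python
# Hypothetical trace export: one dict per node, mirroring the metadata
# shown in the Run Tree (execution time and token usage).
trace = [
    {"name": "Input",  "execution_time": 0.01, "total_tokens": 0},
    {"name": "OpenAI", "execution_time": 2.31, "total_tokens": 1450},
    {"name": "Output", "execution_time": 0.02, "total_tokens": 0},
]

# Slowest node = candidate latency bottleneck.
slowest = max(trace, key=lambda n: n["execution_time"])
# Node with the most tokens = candidate for prompt optimization.
heaviest = max(trace, key=lambda n: n["total_tokens"])

print(f"Slowest node: {slowest['name']} ({slowest['execution_time']}s)")
print(f"Most tokens:  {heaviest['name']} ({heaviest['total_tokens']} tokens)")
```

In longer workflows the same scan, sorted rather than reduced with `max`, gives you a ranked list of nodes to optimize first.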