Input and Output Transformers


Input Transformers

Input transformers are used to map the outputs of the previous node to the inputs of the current node.

By selecting a node and clicking on the Input tab, we can add custom logic to the input transformer.

In this example, the input parameters of the OpenAI Node are mapped to the outputs of the Input Node.

Input transformers use the following JSONPath syntax:

$.{node_name}.output.{parameter_name}
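
For example, assuming the Input Node is referenced as input_node and provides a question parameter (hypothetical names, used purely for illustration), the prompt input of the OpenAI Node could be mapped with an expression such as:

$.input_node.output.question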

Output Transformers

Output transformers serve a similar purpose to input transformers but are applied to a node's output. They are particularly useful when that output is shared among multiple nodes: the output transformer is written once, avoiding the need to create an individual input transformer for each node that receives the output.

You can specify the logic for output transformers in the Output tab.

In this example, the ScaleSerp Node returns a dictionary of data containing both output and metadata. By using output transformers, we can extract only the results found under the result key.
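
As a rough sketch, assuming the output transformer is evaluated against the node's output dictionary and the search results sit under a top-level result key (the exact path depends on how the ScaleSerp Node structures its output), the transformer expression could be as simple as:

$.result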

Output transformers are also useful for ensuring an output is properly parsed before it is passed to an Output Node that does not have an input transformer of its own.
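
For instance, if a node returns a nested structure such as {"result": {"answer": "..."}} (a hypothetical shape, used only for illustration), an output transformer like the one below would hand the Output Node the answer value directly, with no further parsing needed on its side:

$.result.answer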

In a workflow, if nodes are connected directly, the input transformer field is left empty and must be written manually. However, if the parameters of the nodes are connected, the appropriate mapping is established automatically (here's an example).

Example of usage of Output Transformers