LLM Nodes


The LLM nodes in Dynamiq allow users to integrate various LLM providers for natural language processing tasks such as text generation, question answering, and language comprehension. Through connections to services like OpenAI, Anthropic, and custom LLMs, users can configure and optimize workflows for various use cases, including customer support automation, content generation, and more.

Supported Providers and Models

Dynamiq supports a variety of LLM providers, enabling users to choose the best model for their specific use case. Each provider offers unique models with varying capabilities, costs, and performance characteristics. Below is a list of the LLM providers integrated within Dynamiq:

| Provider | Description | Models | API Documentation |
| --- | --- | --- | --- |
| OpenAI | Advanced language models suitable for complex language tasks. | GPT-4o, GPT-4o-mini | OpenAI API Documentation |
| Anthropic | Models like Claude for conversational and generative text applications. | Claude 3.5 Sonnet, Claude 3.5 Haiku | Anthropic API Documentation |
| Cohere | Text generation and embedding models, often used for content creation and analysis. | Command R+, Command R | Cohere API Documentation |
| Gemini | Models optimized for information retrieval and summarization tasks. | Gemini Pro 1.5, Gemini Flash 1.5 | Google AI API Documentation |
| AWS Bedrock | Cloud-based service offering models optimized for enterprise and general-purpose NLP tasks. | Various proprietary models | AWS Bedrock Documentation |
| Groq | Models optimized for high-performance inference, often used in real-time applications. | Llama 3.1, Llama 3.2 | Groq API Documentation |
| Mistral | Specialized models focusing on efficiency in text generation and comprehension. | Mistral Large, Small, Embed | Mistral Documentation |
| Together AI | Provides collaborative NLP tools for tasks such as summarization and translation. | Llama, Gemma, Mistral models | Together AI Documentation |
| Hugging Face | Offers a vast repository of models, from general-purpose transformers to specialized NLP models. | Open-source models | Hugging Face Documentation |
| IBM WatsonX | IBM's suite of AI tools for enterprise applications, including data analysis and NLP. | Granite | IBM WatsonX Documentation |
| Azure AI | Microsoft's cloud-based language models suitable for enterprise and developer-focused applications. | OpenAI models | Azure AI Documentation |
| Replicate | Models focused on reproducible AI research, useful for scientific and technical applications. | Open-source models | Replicate Documentation |
| SambaNova | AI models optimized for enterprise-scale NLP and other machine learning applications. | Llama family | SambaNova Documentation |
| Cerebras | AI models tailored for high-performance NLP and deep learning tasks in large-scale environments. | Open-source models | Cerebras Documentation |
| DeepInfra | Specializes in deploying high-performance AI infrastructure with NLP capabilities. | Open-source models | DeepInfra Documentation |
| Custom LLM | Allows integration with any OpenAI-compatible or custom-deployed models, ideal for proprietary setups. | Custom models (OpenRouter compatible, self-hosted) | Custom LLM |
| xAI | xAI's Grok models for general-purpose text generation and reasoning. | Grok-3 | xAI |
| Perplexity | Models that combine text generation with built-in web search. | Sonar | Perplexity |


Configuration

1. Setting Up an LLM Node

Each LLM node requires careful configuration to ensure accurate and efficient operation. Follow these steps to set up an LLM node:

Step 1: Connection Configuration

  • Connection: Each LLM node must be linked to its respective service provider.

  • API Keys: Obtain API keys or tokens for each provider by following the documentation links.

Step 2: Prompt Configuration

  • Prompt Library: Dynamiq allows users to select prompts from a library or create inline prompts.

  • Dynamic Prompting: Prompts can be customized based on input parameters to generate diverse responses. This is achieved by inserting parameters directly into the prompt text using the format {{parameter_name}}.
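
For example, a prompt template with a {{parameter_name}} placeholder is filled in at runtime. The sketch below illustrates the substitution mechanics using the Jinja2 library; the parameter name customer_question is a hypothetical example, not a required name.

```python
from jinja2 import Template

# Hypothetical prompt with a runtime parameter, using {{parameter_name}} syntax.
prompt_text = "You are a support agent. Answer the customer's question: {{customer_question}}"

rendered = Template(prompt_text).render(
    customer_question="How do I reset my password?"
)
print(rendered)
# You are a support agent. Answer the customer's question: How do I reset my password?
```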

2. Core Parameters

These parameters allow you to control the behavior and performance of the LLM node, optimizing it for various applications:

| Parameter | Description | Example Values |
| --- | --- | --- |
| Model | Specifies the LLM model to use. The model field supports free-text input with auto-suggestions, allowing immediate access to new models. Ensure the model name matches the provider's offerings. | gpt-4o, claude-3-haiku |
| Temperature | Controls the level of randomness in the model's output. Lower values (close to 0) make responses more deterministic, suitable for tasks requiring precision. Higher values (close to 1) encourage creative responses, ideal for content generation. | 0.2 for deterministic, 0.7 for creative |
| Max Tokens | Sets the maximum number of tokens the model can generate in its response. Useful for limiting output length to control costs or meet specific response size requirements. | 500, 1000 |
| Streaming | Enables token-by-token streaming of responses, providing real-time feedback. Recommended for use cases requiring quick insights, such as interactive applications. | Enabled, Disabled |
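
To make these parameters concrete, here is a minimal sketch of how Model, Temperature, and Max Tokens map onto a direct call to an OpenAI-compatible chat completions API. It illustrates what the parameters do, not how the node is configured in the builder; the prompt text is made up.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",   # Model: must match the provider's model name
    temperature=0.2,       # Temperature: low value for deterministic answers
    max_tokens=500,        # Max Tokens: caps response length (and cost)
    messages=[{"role": "user", "content": "Summarize our refund policy in two sentences."}],
)
print(response.choices[0].message.content)
```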

Input/Output Configuration

Input Processing

  • JSONPath Selection: Use JSONPath to filter and structure input data, allowing precise extraction of information (see the sketch after this list).

  • Prompt Templates: Dynamically create prompts by inserting runtime parameters.
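
A minimal sketch of the idea, using the jsonpath-ng library to pull a field out of a hypothetical upstream node output so it can feed a prompt parameter; the input shape and path are made up for illustration.

```python
from jsonpath_ng import parse

# Hypothetical output of an upstream node.
node_input = {"retriever": {"documents": [{"content": "Doc A"}, {"content": "Doc B"}]}}

# JSONPath expression selecting just the document texts.
matches = parse("$.retriever.documents[*].content").find(node_input)
contexts = [m.value for m in matches]

# The extracted values could then fill a {{context}} parameter in a prompt template.
print(contexts)  # ['Doc A', 'Doc B']
```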

Output Processing

  • Filtering: Filter responses to retrieve only relevant data.

  • Structured Outputs: Dynamiq supports different output formats, including plain text and JSON.
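
When JSON output is requested, downstream nodes usually need to parse and filter it; a small, hedged sketch of doing that defensively (the field names are hypothetical):

```python
import json

raw_output = '{"sentiment": "positive", "confidence": 0.92}'  # example model response

try:
    parsed = json.loads(raw_output)
except json.JSONDecodeError:
    parsed = {}  # in practice: retry, or route to an error branch

sentiment = parsed.get("sentiment")  # keep only the field the next node needs
print(sentiment)  # positive
```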

Best Practices

Model Selection

Selecting the right model is crucial for balancing cost, speed, and quality:

  • Complex Tasks: Use more advanced models such as GPT-4o or Claude 3.5 Sonnet for complex outputs.

  • Cost-Efficiency: Opt for smaller models such as GPT-4o-mini or Claude 3.5 Haiku for simpler tasks to reduce expenses.

  • Provider-Specific Features: Some providers offer unique features like function calling; refer to provider documentation for details.

Prompt Engineering

Effective prompt design can significantly impact model performance:

  • Clear Instructions: Use specific language to minimize ambiguity.

  • Contextual Information: Include background details to guide the model's response.

  • Testing: Test prompts across various inputs to ensure consistency.

Error Handling

To ensure seamless operation and improve resilience, configure error handling mechanisms within your workflow:

| Parameter | Description | Example Values |
| --- | --- | --- |
| Interval | Sets the delay (in seconds) before the first retry attempt. Must be greater than 0. | 2 seconds |
| Max Attempts | Specifies the maximum number of retry attempts. Set to 0 or greater to determine retry limits. | 3 attempts |
| Backoff Rate | Multiplier that increases the retry interval for each subsequent attempt, progressively reducing load on the system. Must be 1 or greater. | 2.0 |
| Timeout | Sets a timeout limit (in seconds) for each attempt. If exceeded, the attempt is aborted to prevent excessive delays. | 10 seconds |

Adjust these settings based on workflow requirements and provider limitations to avoid unnecessary delays or costs.
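
As a reference point, the sketch below shows how these four parameters typically combine into a retry loop. It is an illustration of the semantics, not Dynamiq's internal implementation; the function being retried is a placeholder.

```python
import time

def call_with_retries(call, interval=2.0, max_attempts=3, backoff_rate=2.0, timeout=10.0):
    """Retry `call` with a growing delay between attempts (illustrative only)."""
    delay = interval
    for attempt in range(1, max_attempts + 1):
        try:
            return call(timeout=timeout)   # Timeout: each attempt gets its own limit
        except Exception:                  # in practice, catch provider-specific errors
            if attempt == max_attempts:
                raise                      # Max Attempts exhausted: surface the failure
            time.sleep(delay)              # Interval: wait before the next attempt
            delay *= backoff_rate          # Backoff Rate: lengthen the wait each time
```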

Rate Limit Handling

Consider configuring rate limits to stay compliant with provider-specific quotas. Utilize the backoff rate and interval settings to manage requests dynamically and avoid reaching rate limits, which can lead to throttling or blocked requests.
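
For instance, many providers signal throttling with an HTTP 429 response and often include a Retry-After header. A hedged sketch of honoring that hint with the requests library (the endpoint and payload are placeholders):

```python
import time
import requests

def post_with_rate_limit(url, payload, headers, max_attempts=5):
    """Retry on HTTP 429, waiting for the provider's Retry-After hint when present."""
    for attempt in range(max_attempts):
        resp = requests.post(url, json=payload, headers=headers, timeout=30)
        if resp.status_code != 429:
            return resp
        wait = float(resp.headers.get("Retry-After", 2 ** attempt))  # fall back to backoff
        time.sleep(wait)
    resp.raise_for_status()  # still throttled after all attempts
    return resp
```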

Performance Optimization

Optimize your LLM node's performance by following these strategies:

  • Response Caching: Enable caching to reduce redundant requests and improve speed.

  • Batching Requests: Group requests to process multiple items simultaneously, improving efficiency.

  • Token Usage Monitoring: Track token consumption to control costs and manage API quotas.

| Token Type | Example Input | Cost Calculation |
| --- | --- | --- |
| Prompt Tokens | 250 tokens | (Prompt tokens / 1000) * Cost per 1K tokens |
| Completion Tokens | 500 tokens | (Completion tokens / 1000) * Cost per 1K tokens |
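
Plugging the example token counts into these formulas (the per-1K prices below are placeholders, not actual provider pricing):

```python
prompt_tokens, completion_tokens = 250, 500
prompt_price_per_1k = 0.005      # placeholder price per 1K prompt tokens
completion_price_per_1k = 0.015  # placeholder price per 1K completion tokens

cost = (prompt_tokens / 1000) * prompt_price_per_1k + \
       (completion_tokens / 1000) * completion_price_per_1k
print(cost)  # about $0.009 for this request at these placeholder rates
```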


Advanced Features

Inference Modes

Dynamiq LLM nodes offer multiple modes to tailor responses to specific needs:

  • DEFAULT: Standard text-based response generation.

  • STRUCTURED_OUTPUT: Provides structured outputs in JSON format.
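
When structured output is needed, providers that follow the OpenAI API expose it roughly as in the sketch below; JSON-mode availability depends on the model, and the prompt is illustrative.

```python
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",
    response_format={"type": "json_object"},  # ask for valid JSON instead of free text
    messages=[
        {"role": "system", "content": "Reply with a JSON object with keys 'summary' and 'tags'."},
        {"role": "user", "content": "Summarize: Dynamiq LLM nodes support many providers."},
    ],
)
print(response.choices[0].message.content)  # a JSON string the next node can parse
```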

Streaming Support

Enable streaming for applications requiring real-time feedback, such as customer support or live content generation.
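
In practice, streaming means consuming the response incrementally instead of waiting for the full completion; a minimal sketch against an OpenAI-compatible API:

```python
from openai import OpenAI

client = OpenAI()
stream = client.chat.completions.create(
    model="gpt-4o-mini",
    stream=True,  # Streaming: Enabled
    messages=[{"role": "user", "content": "Write a one-line greeting."}],
)
for chunk in stream:
    delta = chunk.choices[0].delta.content  # tokens arrive piece by piece
    if delta:
        print(delta, end="", flush=True)
```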


Configuring a Custom LLM Node

Dynamiq's custom LLM node allows users to integrate models deployed on their own servers or models compatible with OpenAI's API syntax through OpenRouter. This flexibility provides seamless integration for both proprietary and third-party LLMs.

  1. Add Custom LLM Node: Select the Custom LLM node from the panel.

  2. Choose Model: Enter the model name. Dynamiq supports manual input for models not listed in auto-suggestions.

  3. Prompt Configuration: Define prompts using the inline prompt editor or select from the library.

  4. Connect & Test: Connect to your server or OpenRouter and test the configuration.

Use Custom LLM for experimental models or those deployed within secure environments.
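
Because the node expects OpenAI-compatible syntax, the same endpoint can be smoke-tested with a standard client before wiring it into a workflow; the base URL, API key, and model name below are placeholders for your own deployment.

```python
from openai import OpenAI

# Placeholder values for a self-hosted, OpenAI-compatible deployment.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="YOUR_API_KEY")

response = client.chat.completions.create(
    model="my-custom-model",
    messages=[{"role": "user", "content": "Hello from a custom deployment."}],
)
print(response.choices[0].message.content)
```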
