LLM Answer Generators


Configuring the LLM Node for RAG in Dynamiq

In this section, we'll guide you through setting up the LLM node within Dynamiq's Workflow Builder for a Retrieval-Augmented Generation (RAG) application. This involves selecting an LLM provider, configuring parameters, and integrating the node into your workflow.

Step 1: Select an LLM Provider

Dynamiq offers a wide range of LLM providers. Choose the one that best fits your needs from the provider list in the Workflow Builder.

Step 2: Configure the LLM Node

Once you've selected a provider, configure the LLM node:

  1. Connection Configuration:

    • Name your node for easy identification.

    • Establish a connection using the provider's API keys.

  2. Prompt Configuration:

    • Use the Prompt Library or create an Inline Prompt.

    • Example prompt for question answering:

    Please answer the following question based on the information found
    within the sections enclosed by triple backticks (```).
    Your response should be concise, well-written, and follow markdown formatting guidelines:

      - Use bullet points for list items.
      - Use **bold** text for emphasis where necessary.

    **Question:** {{query}}

    Thank you for your detailed attention to the request.
    **Context information**:
    ```
      {% for document in documents %}
          ---
          Document title: {{ document.metadata["title"] }}
          Document information: {{ document.content }}
          ---
      {% endfor %}
    ```

    **User Question:** {{query}}
    Answer:
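
  • The `documents` and `query` variables in this template are supplied at runtime by the Input Transformer described in Step 3 below.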

  3. Core Parameters:

  • Model: Select the appropriate model.

  • Temperature: Controls randomness in the output (e.g., 0.2 for more deterministic answers, 0.7 for more creative ones).

  • Max Tokens: Define the maximum output length.

  • Streaming: Enable if real-time feedback is needed.
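
If you prefer to define the same node in code, the Dynamiq Python SDK exposes equivalent parameters. The sketch below is illustrative rather than definitive: the node name, API key placeholder, model choice, and prompt text are assumptions you should adapt to your workflow.

    from dynamiq.connections import OpenAI as OpenAIConnection
    from dynamiq.nodes.llms.openai import OpenAI
    from dynamiq.prompts import Message, Prompt

    # Prompt with a Jinja placeholder, mirroring the inline prompt above
    prompt = Prompt(messages=[Message(role="user", content="**Question:** {{query}}")])

    # LLM node configured with the same core parameters as in the builder
    llm = OpenAI(
        name="rag-answer-generator",  # illustrative node name
        connection=OpenAIConnection(api_key="YOUR_API_KEY"),
        model="gpt-4o",  # select the model that fits your task
        temperature=0.2,  # lower values give more deterministic answers
        max_tokens=1000,  # maximum output length
        prompt=prompt,
    )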

Step 3: Input Transformation

To use documents in your prompt, map the output from the retriever node to the LLM node:

  • Use JSONPath syntax in the Input Transformer section:

    {
        "documents": "$.weaviate-retriever.output.documents",
        "query": "$.input.output.question"
    }
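
Here, `weaviate-retriever` is the name of the retriever node in this example; replace it with your own node's name. Once the mapping is applied, the LLM node receives a payload shaped roughly like the following (values are illustrative), which the Jinja loop in the prompt iterates over:

    {
        "documents": [
            {
                "content": "Dynamiq is a platform for building AI workflows...",
                "metadata": {"title": "Getting Started"}
            }
        ],
        "query": "What is Dynamiq?"
    }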

Step 4: Connect the Output

Finally, connect the LLM node's output to the Output Node or any other node as required:

  • Ensure the content from the LLM node is properly routed for display or further processing.
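
If the downstream node uses an Input Transformer of its own, the generated answer can be referenced with the same JSONPath pattern shown in Step 3. A sketch, assuming the LLM node is named `openai-llm` (substitute your node's actual name) and exposes its answer under `output.content`:

    {
        "answer": "$.openai-llm.output.content"
    }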

Additional Tips

  • Streaming Responses: Enable streaming for applications requiring immediate feedback.

  • Prompt Design: Use Jinja templates to dynamically incorporate document metadata into prompts, as in the sketch below.
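
For example, this loop (a sketch, assuming each document carries a `metadata` dictionary as in the prompt above) falls back to a default when a document has no title:

    {% for document in documents %}
    Title: {{ document.metadata["title"] | default("Untitled") }}
    {{ document.content }}
    {% endfor %}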

By following these steps, you can effectively set up the LLM node in your RAG workflow, ensuring accurate and contextually relevant responses.