LLM Answer Generators
In this section, we'll guide you through setting up the LLM node within Dynamiq's Workflow Builder for a Retrieval-Augmented Generation (RAG) application. This involves selecting an LLM provider, configuring parameters, and integrating the node into your workflow.
Step 1: Select an LLM Provider
Dynamiq offers a wide range of LLM providers. Choose the one that best fits your needs from the provider list in the Workflow Builder.
Step 2: Configure the LLM Node
Once you've selected a provider, configure the LLM node:
1. Connection Configuration:
Name your node for easy identification.
Establish a connection using the provider's API keys.
2. Prompt Configuration:
Use the Prompt Library or create an Inline Prompt.
Example prompt for question answering:
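A minimal sketch of an inline Jinja prompt (the `documents` and `query` variables, and the `content` field, are placeholders; match them to your workflow's actual inputs):

```jinja
Answer the question using only the context below. If the context does not
contain the answer, say that you don't know.

Context:
{% for doc in documents %}
{{ doc.content }}
{% endfor %}

Question: {{ query }}
Answer:
```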
3. Core Parameters:
Model: Select the appropriate model.
Temperature: Controls output randomness (e.g., 0.2 for more deterministic answers, 0.7 for more creative ones).
Max Tokens: Define the maximum output length.
Streaming: Enable if real-time feedback is needed.
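As a purely illustrative sketch (not Dynamiq's exact configuration schema; `gpt-4o` is just an example model name), a typical RAG-oriented parameter set might look like:

```json
{
  "model": "gpt-4o",
  "temperature": 0.2,
  "max_tokens": 1024,
  "streaming": true
}
```

Low temperatures generally work well for RAG, since answers should stay grounded in the retrieved context rather than improvised.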
Step 3: Input Transformation
To use documents in your prompt, map the output from the retriever node to the LLM node:
Use JSONPath syntax in the Input Transformer section:
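For example, assuming the retriever node is named `document_retriever` (substitute your node's actual name), a mapping that passes its retrieved documents into the prompt's `documents` variable might look like:

```json
{
  "documents": "$.document_retriever.output.documents"
}
```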
Step 4: Connect the Output
Finally, connect the LLM node's output to the Output Node or to any other downstream node as required. Ensure the content from the LLM node is properly routed for display or further processing, as in the sketch below.
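For instance, an Output Node's Input Transformer could pick up the generated text with a mapping like the following (the node name `llm` is a placeholder, and the `output.content` path assumes the LLM node exposes its generated text under that key):

```json
{
  "answer": "$.llm.output.content"
}
```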
Best Practices
Streaming Responses: Enable streaming for applications requiring immediate feedback.
Prompt Design: Use Jinja templates to dynamically incorporate document metadata into prompts (see the sketch below).
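A sketch assuming each retrieved document exposes `metadata.file_name` and a relevance `score` (adjust these field names to whatever your retriever actually returns):

```jinja
Use the numbered sources below and cite the source number for each fact.

{% for doc in documents %}
[{{ loop.index }}] {{ doc.metadata.file_name }} (score: {{ doc.score }})
{{ doc.content }}
{% endfor %}

Question: {{ query }}
```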
By following these steps, you can effectively set up the LLM node in your RAG workflow, ensuring accurate and contextually relevant responses.