# LLM Answer Generators

### Configuring the LLM Node for RAG in Dynamiq

In this section, we'll guide you through setting up the LLM node within Dynamiq's Workflow Builder for a Retrieval-Augmented Generation (RAG) application. This involves selecting an LLM provider, configuring parameters, and integrating the node into your workflow.

**Step 1: Select an LLM Provider**

Dynamiq offers a wide range of LLM providers. Choose the one that best fits your needs from the list:

<figure><img src="https://4279757243-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FTbBxR0Ob7RUmbvHZkQi2%2Fuploads%2FVYuCEW3Bz6hSyk5Eekc8%2Fimage.png?alt=media&#x26;token=e297695a-ef00-415d-9b02-868d29b420c0" alt="" width="249"><figcaption></figcaption></figure>

**Step 2: Configure the LLM Node**

Once you've selected a provider, configure the LLM node:

<figure><img src="https://4279757243-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FTbBxR0Ob7RUmbvHZkQi2%2Fuploads%2FE8lb2FKbw6szAxU2QvJK%2Fimage.png?alt=media&#x26;token=125425e1-d82c-4fb2-8236-092465e9ae49" alt="" width="375"><figcaption></figcaption></figure>

1. **Connection Configuration:**
   * Name your node for easy identification.
   * Establish a connection using the provider's API keys.
2. **Prompt Configuration:**

<figure><img src="https://4279757243-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FTbBxR0Ob7RUmbvHZkQi2%2Fuploads%2FPP4CBoW12HUt2Bveu2St%2Fimage.png?alt=media&#x26;token=0cbf71b8-cd63-45ec-a320-080664557956" alt="" width="375"><figcaption></figcaption></figure>

* Use the Prompt Library or create an Inline Prompt.
* Example prompt for question answering:

  <pre class="language-jinja"><code class="lang-jinja">Please answer the following question based on the information found
  within the sections enclosed by triple backticks (```).
  Your response should be concise, well-written, and follow markdown formatting guidelines:

    - Use bullet points for list items.
    - Use **bold** text for emphasis where necessary.

  **Context information**:
  ```
    {% for document in documents %}
        ---
        Document title: {{ document.metadata["title"] }}
        Document information: {{ document.content }}
        ---
    {% endfor %}
  ```

  **Question:** {{query}}
  Answer:
  </code></pre>
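To see what this template produces at runtime, here is a minimal sketch that renders it with the `jinja2` library. The document objects are hypothetical stand-ins for whatever your retriever node returns; only the `metadata["title"]` and `content` fields from the template above are assumed:

```python
from types import SimpleNamespace
from jinja2 import Template

# Hypothetical documents mimicking the retriever node's output shape.
documents = [
    SimpleNamespace(metadata={"title": "Intro to RAG"},
                    content="RAG combines retrieval with generation."),
]

template = Template(
    "**Context information**:\n"
    "{% for document in documents %}"
    "Document title: {{ document.metadata['title'] }}\n"
    "Document information: {{ document.content }}\n"
    "{% endfor %}"
    "**Question:** {{query}}"
)

prompt = template.render(documents=documents, query="What is RAG?")
print(prompt)
```

Rendering the template locally like this is a quick way to catch missing variables or loop errors before wiring the prompt into the workflow.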

3\. **Core Parameters:**

* **Model:** Select the appropriate model.
* **Temperature:** Set for randomness (e.g., 0.2 for deterministic, 0.7 for creative).
* **Max Tokens:** Define the maximum output length.
* **Streaming:** Enable if real-time feedback is needed.
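To make the role of each parameter concrete, the settings above might correspond to a configuration like the following. The field names and model identifier here are illustrative assumptions, not Dynamiq's exact schema:

```python
# Hypothetical configuration illustrating the core LLM parameters.
llm_config = {
    "model": "gpt-4o",    # the provider model to call (illustrative)
    "temperature": 0.2,   # low value -> more deterministic answers
    "max_tokens": 512,    # hard cap on the length of the generated answer
    "streaming": True,    # emit tokens as they are produced
}

# Quick sanity checks you might run before deploying the workflow:
assert 0.0 <= llm_config["temperature"] <= 2.0
assert llm_config["max_tokens"] > 0
print("Configuration looks valid.")
```

For RAG question answering, a low temperature is usually preferable so answers stay grounded in the retrieved context.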

**Step 3: Input Transformation**

<figure><img src="https://4279757243-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FTbBxR0Ob7RUmbvHZkQi2%2Fuploads%2FdF7c7QXh5fhVxKGux0ml%2Fimage.png?alt=media&#x26;token=6d7c592b-f9b7-4f06-89d2-e0d128950e10" alt=""><figcaption></figcaption></figure>

To use documents in your prompt, map the output from the retriever node to the LLM node:

* Use JSONPath syntax in the Input Transformer section:

  ```json
  {
      "documents":"$.weaviate-retriever.output.documents",
      "query":"$.input.output.question"
  }
  ```
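Conceptually, the Input Transformer walks each JSONPath into the upstream node outputs and binds the result to a prompt variable. A minimal pure-Python sketch of that behavior, where the `workflow_state` payload shape is a simplified assumption:

```python
def resolve(path: str, data: dict):
    """Resolve a simple '$.a.b.c' JSONPath against nested dicts."""
    value = data
    for key in path.lstrip("$.").split("."):
        value = value[key]
    return value

# Simplified stand-in for the outputs of upstream workflow nodes.
workflow_state = {
    "weaviate-retriever": {"output": {"documents": [{"content": "..."}]}},
    "input": {"output": {"question": "What is RAG?"}},
}

transformer = {
    "documents": "$.weaviate-retriever.output.documents",
    "query": "$.input.output.question",
}

llm_inputs = {name: resolve(path, workflow_state)
              for name, path in transformer.items()}
print(llm_inputs["query"])  # -> What is RAG?
```

The keys of the transformer mapping (`documents`, `query`) must match the variable names used in the Jinja prompt.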

**Step 4: Connect the Output**

Finally, connect the LLM node's output to the Output Node or any other node as required:

* Ensure the content from the LLM node is properly routed for display or further processing.

### **Additional Tips**

* **Streaming Responses:** Enable streaming for applications requiring immediate feedback.
* **Prompt Design:** Use Jinja templates to dynamically incorporate document metadata into prompts.

By following these steps, you can effectively set up the LLM node in your RAG workflow, ensuring accurate and contextually relevant responses.
