# Prompt Playground

The prompt playground enables you to design, test, and refine prompts by selecting language models, configuring connections, and defining message types. Here’s a step-by-step guide to navigating the playground effectively.

<figure><img src="https://4279757243-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FTbBxR0Ob7RUmbvHZkQi2%2Fuploads%2FOunDMNUHxobNhpwqUmpZ%2Fprompt-playground-select-model.gif?alt=media&#x26;token=89287378-7197-4ef8-a8b9-8459b4fcb8fa" alt=""><figcaption><p>Select LLM and choose connection and model</p></figcaption></figure>

<figure><img src="https://4279757243-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FTbBxR0Ob7RUmbvHZkQi2%2Fuploads%2FAZBMjY36iIfIVztwpTjk%2Fprompt-playground-create-prompts.gif?alt=media&#x26;token=36a1bd0e-43f7-4086-bd78-f965b0511a8c" alt=""><figcaption><p>Write and configure message prompts</p></figcaption></figure>

<figure><img src="https://4279757243-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FTbBxR0Ob7RUmbvHZkQi2%2Fuploads%2FLAZtOVSLNG2QxwqwLXnK%2Fprompt-playground-test-prompts.gif?alt=media&#x26;token=72626bc1-d1b8-46c0-826b-8a5d4454b272" alt=""><figcaption><p>Test prompts output</p></figcaption></figure>

### **Step 1: Select the LLM**

* **Required**: The first step is to select the **LLM** from the dropdown. Choose the LLM that best suits your prompt requirements.

### **Step 2: Create or Choose a Connection**

* After selecting the LLM, you must establish a **connection**. You can either create a new connection or choose from existing ones:
  * Click **+ New connection** if a new setup is needed.
  * Alternatively, select from the predefined connections if available.

### **Step 3: Specify or Choose the Model**

* In the **Model** field, either type the model name directly or pick one from the suggested options.
* This step ensures that your prompt is routed to the correct model associated with the chosen LLM and connection.

### **Step 4: Writing the Prompt**

* In the **Prompt** section, you can define the user input, responses, and instructions:
  * Type your prompt text in the **Enter text here** field.
  * Prompts can include **dynamic variables** written as `{{variable_name}}`. During runtime, these placeholders will be populated with specific values. For example, `{{user_name}}` will be replaced by the user’s name if that variable is defined.

### **Step 5: Configure Message Types**

* By default, messages are set as **User** messages. However, you can change the message type as follows:
  * Select **System** for messages intended to provide background information or instructions that influence the model’s behavior.
  * Choose **Assistant** to predefine responses as if they are coming from the assistant itself.
* This flexibility allows you to simulate various conversation flows, ensuring your model responds correctly based on message type.
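Under the hood, a conversation with typed messages is commonly represented as an ordered list of role/content pairs. The sketch below uses the OpenAI-style chat format that many providers accept; it is an assumption for illustration, not a documented Dynamiq schema:

```python
# A simulated conversation built from the three message types.
messages = [
    # System: background instructions that steer model behavior.
    {"role": "system", "content": "You are a concise support agent."},
    # User: the input the model should respond to.
    {"role": "user", "content": "How do I reset my password?"},
    # Assistant: a predefined reply used to simulate a prior turn.
    {"role": "assistant", "content": "Open Settings and click Reset."},
    # The next user turn continues the simulated flow.
    {"role": "user", "content": "I don't see a Settings page."},
]

for message in messages:
    print(f"[{message['role']}] {message['content']}")
```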

### **Step 6: Adjust Temperature and Max Tokens**

* **Temperature**: Control the randomness of the model’s responses. A lower temperature (e.g., 0.3) makes responses more deterministic, while a higher value introduces variability.
* **Max Tokens**: Set the maximum number of tokens (sub-word units of text; a token is roughly a short word or word fragment) the model may generate. This caps the response length, ensuring outputs stay concise when necessary.
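To make the two settings concrete, here is a hypothetical request payload showing where they fit. The field names (`temperature`, `max_tokens`) mirror common provider APIs and the model name is an example, not a documented Dynamiq schema:

```python
# Illustrative request configuration (field names are assumptions).
request = {
    "model": "gpt-4o",  # example model name
    "messages": [
        {"role": "user", "content": "Name three colors."}
    ],
    "temperature": 0.3,  # low value -> more deterministic output
    "max_tokens": 50,    # hard cap on response length in tokens
}
```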

### **Step 7: Using Streamed Responses**

* **Stream**: Toggle the **Stream** switch on or off based on your preference:
  * **On**: Responses will be streamed, appearing in real time as they are generated by the model.
  * **Off**: The entire response will appear at once after generation.
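The difference between the two modes can be sketched with a toy generator. `fake_stream` below only simulates how a streamed completion arrives in chunks; it is not a Dynamiq API:

```python
from typing import Iterator

def fake_stream(text: str, chunk_size: int = 8) -> Iterator[str]:
    """Yield a response in small chunks, the way a streamed
    completion arrives piece by piece. Illustrative only."""
    for i in range(0, len(text), chunk_size):
        yield text[i:i + chunk_size]

response = "Streaming shows partial output as it is generated."

# Stream ON: handle each chunk as soon as it arrives.
for chunk in fake_stream(response):
    print(chunk, end="", flush=True)
print()

# Stream OFF: wait for generation to finish, then handle it whole.
full = "".join(fake_stream(response))
print(full)
```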


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available in this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://docs.getdynamiq.ai/prompts/prompt-playground.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present in the current page, you need clarification or additional context, or you want to retrieve related documentation sections.
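The request above can be issued with any HTTP client; the only subtlety is that the question must be URL-encoded. A minimal sketch using Python's standard library to build the URL:

```python
from urllib.parse import urlencode

# Endpoint from this page; the `ask` value must be URL-encoded.
base = "https://docs.getdynamiq.ai/prompts/prompt-playground.md"
question = "How do dynamic variables work in prompts?"
url = f"{base}?{urlencode({'ask': question})}"
print(url)

# Perform the GET with your preferred HTTP client, e.g.:
#   urllib.request.urlopen(url)  (standard library)
```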
