Prompt Playground
The prompt playground enables you to design, test, and refine prompts by selecting language models, configuring connections, and defining message types. Here’s a step-by-step guide to navigating the playground effectively.

Step 1: Select the LLM
- Required: The first step is to select the LLM from the dropdown. Choose the LLM that best suits your prompt requirements.
Step 2: Create or Choose a Connection
- After selecting the LLM, you must establish a connection. You can either create a new connection or choose from existing ones:
  - Click + New connection if a new setup is needed.
  - Alternatively, select from the predefined connections if available.
 
Step 3: Specify or Choose the Model
- In the Model field, either type the model name directly or pick one of the suggested options.
- This step ensures that your prompt is routed to the correct model associated with the chosen LLM and connection. 
Step 4: Write the Prompt
- In the Prompt section, you can define the user input, responses, and instructions:
  - Type your prompt text in the Enter text here field.
  - Prompts can include dynamic variables written as `{{variable_name}}`. During runtime, these placeholders will be populated with specific values. For example, `{{user_name}}` will be replaced by the user's name if that variable is defined (see the sketch below).
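Conceptually, the substitution works like this minimal Python sketch. The playground's actual templating engine isn't exposed, so the regex pattern and error handling here are illustrative assumptions:

```python
import re

def render_prompt(template: str, variables: dict) -> str:
    """Replace every {{variable_name}} placeholder with its runtime value.
    Illustrative only -- the playground's real templating engine may differ."""
    def substitute(match):
        name = match.group(1)
        if name not in variables:
            raise KeyError(f"No value provided for variable '{name}'")
        return variables[name]
    return re.sub(r"\{\{\s*(\w+)\s*\}\}", substitute, template)

# {{user_name}} is filled in at runtime:
print(render_prompt("Hello {{user_name}}, summarize my open tickets.",
                    {"user_name": "Ada"}))
# -> Hello Ada, summarize my open tickets.
```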
 
Step 5: Configure Message Types
- By default, messages are set as User messages. However, you can change the message type as follows:
  - Select System for messages intended to provide background information or instructions that influence the model's behavior.
  - Choose Assistant to predefine responses as if they are coming from the assistant itself.
- This flexibility allows you to simulate various conversation flows, ensuring your model responds correctly based on message type (see the sketch after this list).
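As a rough mental model, the three message types combine into a single conversation payload. The sketch below uses the widely shared role/content convention; these field names are assumptions, not the playground's internal format:

```python
# Hypothetical conversation payload; "role"/"content" follow the common
# chat-completion convention and are assumptions, not the playground's
# internal format.
messages = [
    # System: background instructions that shape the model's behavior.
    {"role": "system", "content": "You are a concise support agent."},
    # User: the end user's input (the default message type).
    {"role": "user", "content": "My invoice total looks wrong."},
    # Assistant: a predefined reply, as if it came from the assistant,
    # useful for simulating an earlier turn in the conversation.
    {"role": "assistant", "content": "Sorry about that! Which invoice?"},
    # The next User message continues the simulated flow.
    {"role": "user", "content": "Invoice #4521."},
]
```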
 
Step 6: Adjust Temperature and Max Tokens
- Temperature: Control the randomness of the model’s responses. A lower temperature (e.g., 0.3) makes responses more deterministic, while a higher value introduces variability. 
- Max Tokens: Set the maximum number of tokens (word fragments, punctuation marks, etc.) allowed in the response. This caps the response length, ensuring outputs are concise when necessary (see the sketch below).
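To see how the two settings fit together, here is a sketch of the kind of request the playground might assemble. The parameter names follow common LLM API conventions and the model name is a hypothetical placeholder:

```python
# Illustrative request settings; "temperature" and "max_tokens" follow
# common LLM API conventions -- an assumption about how the playground
# forwards them to the provider.
request = {
    "model": "example-model",  # hypothetical placeholder name
    "messages": [{"role": "user", "content": "Summarize this ticket."}],
    "temperature": 0.3,  # low -> more deterministic; higher -> more varied
    "max_tokens": 256,   # hard cap on response length, counted in tokens
}
```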
Step 7: Use Streamed Responses
- Stream: Toggle the Stream switch on or off based on preference:
  - On: Responses will be streamed, appearing in real time as they are generated by the model.
  - Off: The entire response will appear at once after generation completes (see the sketch below).
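The difference between the two modes can be simulated in a few lines of Python; `fake_stream` below is a stand-in for a provider's streaming API, not the playground's actual client:

```python
import time

def fake_stream(text: str, chunk_size: int = 8):
    """Yield the response in small chunks, standing in for a real
    streaming API (assumption: actual chunk boundaries will differ)."""
    for i in range(0, len(text), chunk_size):
        time.sleep(0.05)  # simulated generation/network latency
        yield text[i:i + chunk_size]

response = "Streaming shows partial output as soon as it is generated."

stream_enabled = True  # mirrors the Stream toggle
if stream_enabled:
    # Stream ON: print each chunk the moment it arrives.
    for chunk in fake_stream(response):
        print(chunk, end="", flush=True)
    print()
else:
    # Stream OFF: collect everything, then show it all at once.
    print("".join(fake_stream(response)))
```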
 