Prompt Playground
The prompt playground enables you to design, test, and refine prompts by selecting language models, configuring connections, and defining message types. Here’s a step-by-step guide to navigating the playground effectively.
The first step is to select the LLM from the dropdown (this field is required). Choose the LLM that best suits your prompt requirements.
After selecting the LLM, you must establish a connection. You can either create a new connection or choose from existing ones:
Click + New connection if a new setup is needed.
Alternatively, select from the predefined connections if available.
In the Model field, either type the model name directly or pick one of the suggested options.
This step ensures that your prompt is routed to the correct model associated with the chosen LLM and connection.
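As a rough mental model, the three selections combine into a single routing configuration. The sketch below is illustrative only; the field names (provider, connection, model) and values are assumptions, not the playground's actual API.

```python
# Hypothetical mapping of the playground's selections onto a request configuration.
# All names and values here are illustrative examples.
request_config = {
    "provider": "openai",            # the LLM chosen from the dropdown
    "connection": "my-openai-conn",  # the new or predefined connection
    "model": "gpt-4o-mini",          # the model typed or auto-selected in the Model field
}
```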
In the Prompt section, you can define the user input, responses, and instructions:
Type your prompt text in the Enter text here field.
Prompts can include dynamic variables written as {{variable_name}}. At runtime, these placeholders are populated with specific values; for example, {{user_name}} is replaced by the user's name if that variable is defined.
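Conceptually, the substitution works like the minimal sketch below. The playground performs this step internally; the helper function and variable names here are assumptions for illustration.

```python
import re

def render_prompt(template: str, variables: dict) -> str:
    """Replace each {{name}} placeholder with its value from `variables`."""
    return re.sub(
        r"\{\{(\w+)\}\}",
        # Leave unknown placeholders untouched so missing variables are easy to spot.
        lambda m: str(variables.get(m.group(1), m.group(0))),
        template,
    )

print(render_prompt("Hello {{user_name}}, welcome back!", {"user_name": "Ada"}))
# -> Hello Ada, welcome back!
```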
By default, messages are set as User messages. However, you can change the message type as follows:
Select System for messages intended to provide background information or instructions that influence the model’s behavior.
Choose Assistant to predefine responses as if they are coming from the assistant itself.
This flexibility allows you to simulate various conversation flows, ensuring your model responds correctly based on message type.
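A typical conversation built from the three message types looks like the sketch below. The exact payload format depends on the provider; this mirrors the common chat schema, and the content strings are invented examples.

```python
# Illustrative conversation combining System, User, and Assistant messages.
messages = [
    {"role": "system", "content": "You are a concise support assistant."},   # background instructions
    {"role": "user", "content": "How do I reset my password?"},              # user input
    {"role": "assistant", "content": "Go to Settings > Security > Reset."},  # predefined assistant reply
    {"role": "user", "content": "And if I no longer have access to my email?"},  # follow-up to test the flow
]
```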
Temperature: Control the randomness of the model’s responses. A lower temperature (e.g., 0.3) makes responses more deterministic, while a higher value introduces variability.
Max Tokens: Set the maximum number of tokens (chunks of text such as words or word fragments) the response may contain. This caps the response length, keeping outputs concise when necessary.
Stream: Toggle the Stream switch on or off based on preference:
On: Responses will be streamed, appearing in real time as they are generated by the model.
Off: The entire response will appear at once after generation.
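In a typical chat-completion request, these three settings map onto parameters like the ones below. The parameter names follow the common OpenAI-style schema and are assumptions; your provider's names may differ.

```python
# Hedged sketch of how Temperature, Max Tokens, and Stream appear in a request.
params = {
    "temperature": 0.3,   # lower = more deterministic, higher = more varied
    "max_tokens": 256,    # hard cap on the length of the generated response
    "stream": True,       # True: tokens arrive incrementally; False: full response at once
}
```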