Prompt Playground

The prompt playground enables you to design, test, and refine prompts by selecting language models, configuring connections, and defining message types. Here’s a step-by-step guide to navigating the playground effectively.

Step 1: Select the LLM

  • Required: First, select the LLM from the dropdown. Choose the LLM that best suits your prompt requirements.

Step 2: Create or Choose a Connection

  • After selecting the LLM, you must establish a connection. You can either create a new connection or choose from existing ones:

    • Click + New connection if a new setup is needed.

    • Alternatively, select from the predefined connections if available.

Step 3: Specify or Choose the Model

  • In the Model field, either type the model name directly or pick one of the suggested options.

  • This step ensures that your prompt is routed to the correct model associated with the chosen LLM and connection.
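
For reference, the same trio of LLM, connection, and model can be configured in code. Below is a minimal sketch following the pattern in the Dynamiq Python SDK's public examples; exact module paths and parameter names may differ in your SDK version, so treat it as illustrative:

```python
from dynamiq.connections import OpenAI as OpenAIConnection
from dynamiq.nodes.llms import OpenAI

# The connection holds the credentials (Step 2); the node binds it to a
# concrete model name (Step 3), like the Model field in the playground.
llm = OpenAI(
    connection=OpenAIConnection(api_key="YOUR_OPENAI_API_KEY"),
    model="gpt-4o",
)
```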

Step 4: Write the Prompt

  • In the Prompt section, you can define the user input, responses, and instructions:

    • Type your prompt text in the Enter text here field.

    • Prompts can include dynamic variables written as {{variable_name}}. During runtime, these placeholders will be populated with specific values. For example, {{user_name}} will be replaced by the user’s name if that variable is defined.
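
The {{...}} placeholders behave like Jinja-style template variables. Whether Dynamiq uses Jinja internally is an assumption here, but the substitution itself works as in this plain-Python sketch:

```python
from jinja2 import Template

# A prompt as it might be written in the playground, with one dynamic variable.
prompt_text = "Hello {{ user_name }}, summarize today's news in one sentence."

# At runtime the placeholder is filled with a concrete value.
rendered = Template(prompt_text).render(user_name="Alice")
print(rendered)  # Hello Alice, summarize today's news in one sentence.
```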

Step 5: Configure Message Types

  • By default, messages are set as User messages. However, you can change the message type as follows:

    • Select System for messages intended to provide background information or instructions that influence the model’s behavior.

    • Choose Assistant to predefine responses as if they are coming from the assistant itself.

  • This flexibility allows you to simulate various conversation flows, ensuring the model responds appropriately to each message type; the sketch below shows how the three roles map onto a chat-style API.
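
A sketch of that mapping, using the OpenAI Python client rather than Dynamiq itself (the message contents are hypothetical):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        # System: background instructions that shape the model's behavior.
        {"role": "system", "content": "You are a concise support agent."},
        # Assistant: a predefined reply, as if the assistant had already spoken.
        {"role": "assistant", "content": "Hi! How can I help you today?"},
        # User: the actual end-user input.
        {"role": "user", "content": "My invoice total looks wrong."},
    ],
)
print(response.choices[0].message.content)
```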

Step 6: Adjust Temperature and Max Tokens

  • Temperature: Control the randomness of the model’s responses. A lower temperature (e.g., 0.3) makes responses more deterministic, while a higher value introduces variability.

  • Max Tokens: Set the maximum number of tokens (sub-word units of text; a token is roughly three-quarters of an English word) for the response. This caps the response length, keeping outputs concise when necessary.
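
Both settings correspond to standard sampling parameters in chat-completion APIs. A brief sketch, again using the OpenAI Python client for illustration:

```python
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Name three colors."}],
    temperature=0.3,  # low temperature -> more deterministic responses
    max_tokens=50,    # hard cap on the number of tokens generated
)
print(response.choices[0].message.content)
```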

Step 7: Use Streamed Responses

  • Stream: Toggle the Stream switch on or off based on preference:

    • On: Responses will be streamed, appearing in real time as they are generated by the model.

    • Off: The entire response will appear at once after generation.
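
Programmatically, the toggle corresponds to a stream flag on the request. A minimal sketch with the OpenAI Python client, which yields chunks as they are generated (the playground's On setting):

```python
from openai import OpenAI

client = OpenAI()

stream = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Tell me a short joke."}],
    stream=True,  # equivalent to Stream = On in the playground
)

# Print each text chunk as soon as it arrives.
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
print()
```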

Screenshots: selecting the LLM, connection, and model; writing and configuring message prompts; testing prompt output.