
Using Adapters


Last updated 5 months ago


Once a fine-tuning job finishes successfully, you can view details about the resulting adapter by clicking on any adapter whose status is trained. For each adapter, the following details are shown:

  • Adapter Name: The name of the fine-tuned adapter.

  • Adapter Status: The status of the fine-tuning job (trained, training, failed).

  • Adapter Alias: The unique identifier for the fine-tuned adapter.

  • Created By: The user who created the fine-tuned adapter.

  • Creation Date: The date and time when the fine-tuned adapter was created.

Clicking an adapter opens its details page, which shows the adapter alias, the base model it was fine-tuned on, and when it was deployed. The page also includes a code example for calling this adapter in your API requests.

Here is an example of the adapter details page:

Usage

To use a fine-tuned adapter, set the model parameter in your API request to the adapter's identifier in the standard format dynamiq/adapters/{adapter-alias}.

You can start from the code example shown on the adapter details page, or from the example provided for the base model on the deployment page. Either way, replace the model parameter with dynamiq/adapters/{adapter-alias} for the adapter you want to use.

For instance,

  • Adapter alias: mistral-lora-test-v2v3e7op

  • Model parameter: dynamiq/adapters/mistral-lora-test-v2v3e7op

  • Request for querying the adapter:

response = client.chat.completions.create(model="dynamiq/adapters/mistral-lora-test-v2v3e7op", messages=messages)

Or

  • Adapter alias: llama-8b-fine-tuning-test-djxmqmwj

  • Model parameter: dynamiq/adapters/llama-8b-fine-tuning-test-djxmqmwj

  • Request for querying the adapter:

response = client.chat.completions.create(model="dynamiq/adapters/llama-8b-fine-tuning-test-djxmqmwj", messages=messages)
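The alias-to-model mapping used in both examples above can be captured in a small helper. This is an illustrative sketch, not part of the Dynamiq SDK; the client setup in the comments is an assumption, so use the code example from your adapter details page for the actual client configuration:

```python
def adapter_model(adapter_alias: str) -> str:
    """Build the model parameter for a fine-tuned adapter.

    Fine-tuned adapters are referenced in API requests as
    "dynamiq/adapters/{adapter-alias}".
    """
    return f"dynamiq/adapters/{adapter_alias}"


# Aliases from the examples above:
print(adapter_model("mistral-lora-test-v2v3e7op"))
# dynamiq/adapters/mistral-lora-test-v2v3e7op
print(adapter_model("llama-8b-fine-tuning-test-djxmqmwj"))
# dynamiq/adapters/llama-8b-fine-tuning-test-djxmqmwj

# Querying an adapter then looks like the snippets above
# (client construction is hypothetical; copy it from the
# code example on your adapter details page):
#
#   response = client.chat.completions.create(
#       model=adapter_model("mistral-lora-test-v2v3e7op"),
#       messages=[{"role": "user", "content": "Hello"}],
#   )
```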
[Screenshot: Fine-tuning Page with Available Adapters]
[Screenshot: Adapter Details Page]