Basics

The Fine-Tuning Process

The fine-tuning process using Dynamiq involves the following steps:

Step 1: Access the Fine-Tuning Section

  1. In the main dashboard, navigate to the Fine-tuning tab. This section contains options for both Adapters and Jobs.

  2. Open the Jobs tab, where you can view a list of all existing fine-tuning jobs. If this is your first time in this section, no jobs will be listed, and you'll see a message prompting you to create a fine-tuning job.

Step 2: Create a New Fine-Tuning Job

  1. Click on the + Create a fine-tuning job button to initiate a new job.

  2. This will open a sidebar or pop-up window labeled Add new fine-tuned model, where you can specify the details of your fine-tuning job.

Step 3: Configure the Fine-Tuning Job Details

  1. Name: Enter a name for the fine-tuning job. For example, gemma-2b-fine-tuning.

  2. Model: Select the base model you want to fine-tune from the dropdown list. For example, google/gemma-1.1-2b-it.

  3. Resource Profile: Choose the computational resources for the fine-tuning job from the available AWS instance types. For example, g5.2xlarge: 1x NVIDIA A10G GPU, AMD CPU, 8 vCPUs, 32 GB RAM.

  4. Description: Optionally, add a description to provide context for the job.

  5. Hyperparameters (you can optionally adjust these; a short sketch of how they interact follows this list):

    • No. of epochs: Set the number of training epochs, e.g., 10.

    • Batch size: Define the batch size, e.g., 16.

    • Learning rate: Enter the learning rate, e.g., 0.0001.

    • LoRA rank: Configure LoRA rank, e.g., 16.

  6. After configuring these settings, click Next to proceed to the next step.
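
The example values above interact in a simple way. As a rough illustration only, the sketch below computes how many optimizer steps such a job would run, assuming a hypothetical training file of 1,000 examples; the exact batching and scheduling used by Dynamiq's training backend are not documented here.

```python
# Minimal sketch of how the example hyperparameters relate; the dataset size is assumed.
num_examples = 1000        # assumed number of rows in the uploaded JSONL file (illustration only)
epochs = 10                # "No. of epochs" from the job configuration
batch_size = 16            # "Batch size"
learning_rate = 0.0001     # "Learning rate"
lora_rank = 16             # "LoRA rank": dimension of the low-rank update matrices (higher = more trainable parameters)

steps_per_epoch = -(-num_examples // batch_size)   # ceiling division: 63 batches per epoch
total_steps = steps_per_epoch * epochs             # 630 optimizer steps in total
print(f"{steps_per_epoch} steps per epoch, {total_steps} steps overall at lr={learning_rate}")
```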

Step 4: Upload Training Data

  1. In the Training data section, upload a JSONL file with training data, which includes input prompts and expected outputs.

  2. A link to a sample JSONL file is provided for guidance on the required format and structure; an illustrative snippet also follows this list.

  3. Drag and drop the JSONL file or click to upload. The file will be displayed with its original name after a successful upload.
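
For illustration only, the snippet below writes a small JSONL file of prompt/completion pairs. The actual field names and structure are defined by the sample file linked in this step, so treat the keys shown here as assumptions rather than the required schema.

```python
import json

# Hypothetical prompt/completion pairs; check the sample JSONL file for the exact schema.
examples = [
    {"prompt": "Summarize: Dynamiq is a platform for building GenAI workflows.",
     "completion": "Dynamiq helps teams build and deploy GenAI workflows."},
    {"prompt": "Translate to French: Good morning.",
     "completion": "Bonjour."},
]

with open("training_data.jsonl", "w", encoding="utf-8") as f:
    for row in examples:
        f.write(json.dumps(row, ensure_ascii=False) + "\n")  # one JSON object per line
```

Each line of the file must be a single, complete JSON object; blank lines or multi-line objects make the file invalid JSONL.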

Step 5: Initiate Fine-Tuning

  1. Once the training data is uploaded, click the Create button to start the fine-tuning job.

  2. You’ll be redirected back to the Jobs tab, where the new fine-tuning job will appear in the list with its current Status (e.g., Running), the Started By user, and the Start Time.

Step 6: Monitor Fine-Tuning Job

  1. You can monitor the status of your fine-tuning job on this page.

  2. The job’s status will update as it progresses, moving from Running to a final state such as Completed or Failed.

Step 7: Review and Deploy

  1. Once the fine-tuning job is complete, you can review the results and decide whether to deploy the fine-tuned model for inference.

  2. The fine-tuned model will be available for deployment in the Adapters section.
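
For background, a LoRA fine-tuning job produces a compact set of adapter weights rather than a full copy of the base model, which is why the result appears under Adapters. Dynamiq applies these weights for you when you deploy; purely as a sketch of what an adapter is, the snippet below shows how such weights could be loaded onto the base model with the Hugging Face peft library (the local adapter path is hypothetical).

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "google/gemma-1.1-2b-it"                 # base model chosen in Step 3
base = AutoModelForCausalLM.from_pretrained(base_id)
tokenizer = AutoTokenizer.from_pretrained(base_id)

# Hypothetical local directory holding the adapter weights from a completed job.
model = PeftModel.from_pretrained(base, "./gemma-2b-fine-tuning-adapter")

# Optionally fold the LoRA weights into the base model for standalone inference.
model = model.merge_and_unload()
```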

Screenshots referenced above: the Fine-Tuning page on Dynamiq, configuring the fine-tuning job, fine-tuning dataset selection, successful loading of the dataset for fine-tuning, and monitoring the fine-tuning job status.