
Fine-Tuning Parameters Guide

Adjusting the fine-tuning configuration parameters allows you to customize the training process to better suit your data and use case. Here's a list of the supported parameters and their impact on the fine-tuning process:

Learning Rate

  • Controls the size of each weight update, i.e., how quickly the model adapts to the training data.

  • Lower rates make smaller, safer adjustments; higher rates speed up learning but risk overshooting the optimum.

  • Default value: 0.0001
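
To make the effect concrete, here is a minimal sketch in plain Python (not Dynamiq's API; the weight and gradient values are hypothetical) of how the learning rate scales a single gradient-descent update:

```python
learning_rate = 0.0001  # the default above

weight = 0.5      # one model weight (hypothetical)
gradient = 2.3    # gradient of the loss w.r.t. that weight (hypothetical)

# Each update moves the weight against the gradient, scaled by the
# learning rate: small rates take small, safe steps; large rates move
# faster but can overshoot the minimum.
weight -= learning_rate * gradient
print(weight)  # 0.49977
```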

Batch Size

  • The number of training examples processed together in each optimization step.

  • Smaller batch sizes can improve accuracy but require more steps per epoch, so training takes longer.

  • Default value: 16
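
For intuition, the batch size fixes how many optimizer steps one pass over the data takes. A small sketch (the dataset size is hypothetical):

```python
import math

dataset_size = 1_000  # number of training examples (hypothetical)
batch_size = 16       # the default above

# Each optimizer step consumes one batch, so a smaller batch size means
# more steps (and more wall-clock time) per pass over the data.
steps_per_epoch = math.ceil(dataset_size / batch_size)
print(steps_per_epoch)  # 63
```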

Epochs

  • Indicates how many times the model will iterate through the entire dataset.

  • More epochs can improve accuracy but increase computation time.

  • Default value: 10
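
Combined with the batch size, the epoch count determines the total number of optimizer steps, which is where the extra computation time comes from. Continuing the hypothetical numbers from the batch-size sketch:

```python
epochs = 10           # the default above
steps_per_epoch = 63  # from the batch-size sketch (1,000 examples, batch 16)

# Total optimizer steps grow linearly with the number of epochs.
total_steps = epochs * steps_per_epoch
print(total_steps)  # 630
```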

LoRA Rank

  • The rank of the low-rank matrices that LoRA trains in place of a full weight update.

  • Higher ranks can capture more information but require more memory and computation.

  • Default value: 16
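
To see why a higher rank costs more memory and compute: LoRA trains two matrices, A (d × r) and B (r × k), instead of a full d × k weight update, so the trainable parameter count scales linearly with the rank r. A sketch with hypothetical layer dimensions:

```python
d, k = 4096, 4096  # input/output dims of one projection layer (hypothetical)
rank = 16          # the default above

full_update_params = d * k    # a full-rank weight update
lora_params = rank * (d + k)  # LoRA's A (d x r) plus B (r x k)

print(full_update_params)  # 16777216
print(lora_params)         # 131072, a 128x reduction for this layer
```

Doubling the rank doubles the LoRA parameter count, and with it the capacity to capture task-specific information.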
