Basics
The fine-tuning process using Dynamiq involves the following steps:
In the main dashboard, navigate to the Fine-tuning tab. This section contains options for both Adapters and Jobs.
Open the Jobs tab, where you can view a list of all existing fine-tuning jobs. If this is your first time in this section, no jobs will be listed, and you'll see a message prompting you to create a fine-tuning job.
Click on the + Create a fine-tuning job button to initiate a new job.
This will open a sidebar or pop-up window labeled Add new fine-tuned model, where you can specify the details of your fine-tuning job.
Name: Enter a name for the fine-tuning job, for example `gemma-2b-fine-tuning`.
Model: Select the base model you want to fine-tune from the dropdown list. In this case, `google/gemma-1.1-2b-it` was chosen.
Resource Profile: Choose the computational resources required for fine-tuning from one of the available AWS instances, for example `g5.2xlarge: 1x NVIDIA A10G, AMD, 8 vCPUs, 32 GB RAM`.
Description: Optionally, add a description to provide context for the job.
Hyperparameters (you can optionally adjust these; see the illustrative sketch after this list):
No. of epochs: Set the number of training epochs, e.g., 10.
Batch size: Define the batch size, e.g., 16.
Learning rate: Enter the learning rate, e.g., 0.0001.
LoRA rank: Configure LoRA rank, e.g., 16.
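Dynamiq runs the training itself, so you never write this code. As a rough intuition for what these four fields correspond to, the sketch below maps them onto a generic Hugging Face PEFT/LoRA setup. This is an assumption for illustration only, not Dynamiq's actual implementation, and the `lora_alpha` value and `target_modules` shown are common defaults rather than documented settings.

```python
# Illustrative only: an assumed mapping of the job's hyperparameters onto a
# generic Hugging Face PEFT/LoRA configuration. Dynamiq handles this internally.
from peft import LoraConfig
from transformers import TrainingArguments

# LoRA rank (r) controls the size of the low-rank adapter matrices.
lora_config = LoraConfig(
    r=16,                                  # "LoRA rank" field in the job form
    lora_alpha=32,                         # assumed scaling factor (not exposed in the form)
    target_modules=["q_proj", "v_proj"],   # assumed attention projections
    task_type="CAUSAL_LM",
)

training_args = TrainingArguments(
    output_dir="gemma-2b-fine-tuning",     # matches the job name, for illustration
    num_train_epochs=10,                   # "No. of epochs"
    per_device_train_batch_size=16,        # "Batch size"
    learning_rate=1e-4,                    # "Learning rate" (0.0001)
)
```

A higher LoRA rank gives the adapter more capacity at the cost of more trainable parameters and memory; values in the 8–32 range are a common starting point.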
After configuring these settings, click Next to proceed to the next step.
In the Training data section, upload a JSONL file with training data, which includes input prompts and expected outputs.
A sample JSONL file link is provided for guidance on the required format and structure.
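The exact field names Dynamiq expects are defined by that sample file, so follow it for the actual schema. The snippet below only illustrates the JSON Lines layout itself, one JSON object per line, using hypothetical `prompt`/`completion` keys.

```python
# Generic illustration of the JSON Lines (JSONL) format: one JSON object per line.
# The "prompt"/"completion" keys are hypothetical; use the field names shown in
# Dynamiq's sample JSONL file for the required schema.
import json

examples = [
    {"prompt": "Summarize: Dynamiq lets you fine-tune open models.",
     "completion": "Dynamiq supports fine-tuning of open-source models."},
    {"prompt": "Translate to French: Good morning.",
     "completion": "Bonjour."},
]

with open("training_data.jsonl", "w", encoding="utf-8") as f:
    for example in examples:
        f.write(json.dumps(example, ensure_ascii=False) + "\n")
```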
Drag and drop the JSONL file or click to upload. The file will be displayed with its original name after a successful upload.
Once the training data is uploaded, click the Create button to start the fine-tuning job.
You’ll be redirected back to the Jobs tab, where the new fine-tuning job will appear in the list with its current Status (e.g., `Running`), the Started By user, and the Start Time.
You can monitor the status of your fine-tuning job on this page.
The job’s status will update as it progresses, moving from `Running` to a final result such as `Completed` or `Failed`.
Once the fine-tuning job is complete, you can review the results and decide whether to deploy the fine-tuned model for inference.
The fine-tuned model will be available for deployment in the Adapters section.