# Basics

## The Fine-Tuning Process

The fine-tuning process using Dynamiq involves the following steps:

### Step 1: Access the Fine-Tuning Section

<figure><img src="https://4279757243-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FTbBxR0Ob7RUmbvHZkQi2%2Fuploads%2FxpvZIE0yRhJYRyqvG3tH%2Ffine_tuning_page.png?alt=media&#x26;token=df0b202b-a9f6-4623-8e3c-db4f2097998b" alt=""><figcaption><p>Fine-Tuning Page on Dynamiq</p></figcaption></figure>

1. In the main dashboard, navigate to the **Fine-tuning** tab. This section contains options for both **Adapters** and **Jobs**.
2. Open the **Jobs** tab, where you can view a list of all existing fine-tuning jobs. If this is your first time in this section, no jobs will be listed, and you'll see a message prompting you to create a fine-tuning job.

### Step 2: Create a New Fine-Tuning Job

1. Click on the **+ Create a fine-tuning job** button to initiate a new job.
2. This will open a sidebar or pop-up window labeled **Add new fine-tuned model**, where you can specify the details of your fine-tuning job.

### Step 3: Configure the Fine-Tuning Job Details

<figure><img src="https://4279757243-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FTbBxR0Ob7RUmbvHZkQi2%2Fuploads%2FRjqsSOLEpXDBLpEno25B%2Ffine_tuning_selecting_params.png?alt=media&#x26;token=9839e417-e0a9-48dd-9842-9df09222899f" alt=""><figcaption><p>Configuring the Fine-Tuning Job</p></figcaption></figure>

1. **Name**: Enter a name for the fine-tuning job. For example, `gemma-2b-fine-tuning`.
2. **Model**: Select the base model you want to fine-tune from a dropdown list. In this case, `google/gemma-1.1-2b-it` was chosen.
3. **Resource Profile**: Choose the computational resources required for the fine-tuning from one of the available AWS instances. For example, `g5.2xlarge: 1x NVIDIA A10G, AMD, 8 vCPUs, 32 GB RAM`.
4. **Description**: Optionally, add a description to provide context for the job.
5. **Hyperparameters** (you can optionally adjust these):
   * **No. of epochs**: Set the number of training epochs, e.g., 10.
   * **Batch size**: Define the batch size, e.g., 16.
   * **Learning rate**: Enter the learning rate, e.g., 0.0001.
   * **LoRA rank**: Configure LoRA rank, e.g., 16.
6. After configuring these settings, click **Next** to proceed to the next step.
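Before submitting, it can help to sanity-check the hyperparameter values you plan to enter. The sketch below is purely illustrative: the field names (`num_epochs`, `batch_size`, and so on) are hypothetical and do not correspond to any Dynamiq API; the bounds are common-sense defaults, not Dynamiq's validation rules.

```python
def validate_hyperparameters(cfg: dict) -> list[str]:
    """Return a list of problems found in a fine-tuning config (empty if OK).

    Field names here are hypothetical, chosen to mirror the form fields above.
    """
    problems = []
    if cfg.get("num_epochs", 0) < 1:
        problems.append("num_epochs must be at least 1")
    if cfg.get("batch_size", 0) < 1:
        problems.append("batch_size must be at least 1")
    lr = cfg.get("learning_rate", 0.0)
    if not (0.0 < lr <= 1.0):
        problems.append("learning_rate should be in (0, 1]")
    if cfg.get("lora_rank", 0) < 1:
        problems.append("lora_rank must be a positive integer")
    return problems


# Values matching the example job configured above.
config = {
    "num_epochs": 10,
    "batch_size": 16,
    "learning_rate": 0.0001,
    "lora_rank": 16,
}
assert validate_hyperparameters(config) == []
```

A lower LoRA rank trains fewer adapter parameters (cheaper, less expressive); a higher rank does the opposite, so 8–32 is a common starting range.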

### Step 4: Upload Training Data

<figure><img src="https://4279757243-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FTbBxR0Ob7RUmbvHZkQi2%2Fuploads%2Fcyp2piUHcojmbLGFjn5g%2Ffine_tuning_dataset.png?alt=media&#x26;token=c042893a-1f3f-46ef-8c07-33fff6108b56" alt=""><figcaption><p>Fine-tuning dataset selection</p></figcaption></figure>

<figure><img src="https://4279757243-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FTbBxR0Ob7RUmbvHZkQi2%2Fuploads%2FzEmaPVhnPhXjUl6YjBCs%2Ffine_tuning_dataset_selection.png?alt=media&#x26;token=3fe0d2e6-0a23-4498-af9a-6b5a29bf1823" alt=""><figcaption><p>Successful loading of the dataset for fine-tuning</p></figcaption></figure>

1. In the **Training data** section, upload a JSONL file with training data, which includes input prompts and expected outputs.
2. A link to a sample JSONL file is provided for guidance on the required format and structure.
3. Drag and drop the JSONL file or click to upload. The file will be displayed with its original name after a successful upload.
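A JSONL training file is simply one JSON object per line. The snippet below sketches how such a file might be built and sanity-checked in Python; the `prompt`/`completion` field names are an assumption for illustration, so consult the sample JSONL file linked in the UI for the exact schema Dynamiq expects.

```python
import json

# Hypothetical schema: one {"prompt": ..., "completion": ...} object per line.
# Check Dynamiq's sample JSONL file for the actual required field names.
examples = [
    {"prompt": "Summarize: LoRA adds small trainable matrices to a frozen model.",
     "completion": "LoRA fine-tunes a model by training small low-rank adapters."},
    {"prompt": "Translate to French: good morning",
     "completion": "bonjour"},
]

# Write one JSON object per line (the JSONL format).
with open("training_data.jsonl", "w", encoding="utf-8") as f:
    for ex in examples:
        f.write(json.dumps(ex, ensure_ascii=False) + "\n")

# Sanity check: every line must parse as JSON and carry both fields.
with open("training_data.jsonl", encoding="utf-8") as f:
    for line in f:
        record = json.loads(line)
        assert "prompt" in record and "completion" in record
```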

### Step 5: Initiate Fine-Tuning

1. Once the training data is uploaded, click the **Create** button to start the fine-tuning job.
2. You’ll be redirected back to the **Jobs** tab, where the new fine-tuning job will appear in the list with its current **Status** (e.g., `Running`), the **Started By** user, and the **Start Time**.

### Step 6: Monitor Fine-Tuning Job

<figure><img src="https://4279757243-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FTbBxR0Ob7RUmbvHZkQi2%2Fuploads%2FOqNZ5vrSRx8iUR6RvnWT%2Ffine_tuning_job_running.png?alt=media&#x26;token=f8073044-1aed-4ee0-b7f3-3d19c58d93d8" alt=""><figcaption><p>Monitoring the fine-tuning job status</p></figcaption></figure>

1. You can monitor the status of your fine-tuning job on this page.
2. The job's status updates as it progresses, moving from `Running` to a final state such as `Completed` or `Failed`.
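If you want to wait for a job programmatically rather than watching the page, the generic polling pattern looks like the sketch below. Note that `get_job_status` is a hypothetical placeholder callable, not a real Dynamiq client function; in practice you would plug in whatever status lookup your workspace provides.

```python
import time

# States after which the job will no longer change (per the statuses above).
TERMINAL_STATES = {"Completed", "Failed"}


def wait_for_job(get_job_status, job_id, poll_seconds=30, timeout_seconds=3600):
    """Poll a status callable until the job reaches a terminal state.

    `get_job_status` is a hypothetical stand-in: any callable that takes a
    job ID and returns a status string such as "Running" or "Completed".
    """
    deadline = time.time() + timeout_seconds
    while time.time() < deadline:
        status = get_job_status(job_id)
        if status in TERMINAL_STATES:
            return status
        time.sleep(poll_seconds)
    raise TimeoutError(f"Job {job_id} not finished after {timeout_seconds}s")
```

For example, `wait_for_job(my_status_lookup, "job-123")` would block until the job finishes or the timeout elapses.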

### Step 7: Review and Deploy

1. Once the fine-tuning job is complete, you can review the results and decide whether to deploy the fine-tuned model for inference.
2. The fine-tuned model will be available for deployment in the **Adapters** section.
