Fine-tuned Adapters
Deploying Adapters
In addition to deploying base LLMs, Dynamiq enables seamless deployment of fine-tuned adapters, allowing you to customize and optimize large language models for specific tasks. Each deployment includes a dedicated subpage listing all the fine-tuned adapters available for that model, simplifying integration and usage. Fine-tuned LoRA layers are loaded dynamically, so adapter requests are served quickly and accurately without redeploying the base model.
Fine-tuned Adapters Tab
When you deploy an LLM on Dynamiq, a subpage called ADAPTERS is automatically created for that deployment. This subpage contains:
A list of fine-tuned adapters associated with the model.
Details about each adapter, including its alias (a name that can be used to query the adapter in your request), the creator of the adapter, and the date of creation.
Using Fine-tuned Adapters
To use a fine-tuned adapter, set the model parameter in your API request to the adapter's identifier, following the standard format dynamiq/adapters/{adapter-alias} for each adapter you want to use.
For example, if the fine-tuned adapter's alias is mistral-lora-test-v2v3e7op (as shown in the screenshot below), you would set the model parameter in your API request to dynamiq/adapters/mistral-lora-test-v2v3e7op.

Code Example
Here's an example of querying the mistralai/Mistral-7B-Instruct-v0.3 model without the adapter:
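The snippet below is a minimal sketch, assuming the deployment exposes an OpenAI-compatible chat-completions endpoint; the endpoint URL, API key, and prompt are placeholders, so substitute the actual values from your deployment page:

```python
# Minimal sketch, assuming an OpenAI-compatible chat-completions endpoint.
# DEPLOYMENT_URL and API_KEY are placeholders; copy the real values from
# your Dynamiq deployment page.
import requests

DEPLOYMENT_URL = "https://<your-deployment-endpoint>/v1/chat/completions"  # placeholder
API_KEY = "<your-api-key>"  # placeholder


def query(model: str, prompt: str) -> str:
    """Send a single-turn chat request and return the generated text."""
    response = requests.post(
        DEPLOYMENT_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"model": model, "messages": [{"role": "user", "content": prompt}]},
        timeout=60,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]


# Query the base model (no adapter).
print(query("mistralai/Mistral-7B-Instruct-v0.3", "Explain LoRA fine-tuning in one sentence."))
```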
Generated response:
For comparison, here's an example of querying the same model through the mistral-lora-test-v2v3e7op adapter, which was trained on the sample fine-tuning dataset:
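Reusing the query helper from the previous sketch, the only change is the model identifier, which now points to the adapter alias:

```python
# Same request as above; only the model parameter changes to the adapter alias.
print(query("dynamiq/adapters/mistral-lora-test-v2v3e7op",
            "Explain LoRA fine-tuning in one sentence."))
```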
Generated response:
As you can see, the adapter's response is more focused and better aligned with the input prompt and the fine-tuning dataset, showcasing the benefit of fine-tuned adapters for specific tasks.