Welcome to Dynamiq

Welcome to Dynamiq 👋, your end-to-end solution for building and deploying compliant GenAI applications within your own infrastructure. We're very excited that you're here! 🤗

Our platform is designed to streamline the entire AI development lifecycle, providing the tools you need to rapidly prototype, test, deploy, and fine-tune models, all while ensuring complete control over your data and compliance with regulatory requirements.

Why Dynamiq?

In today's fast-paced world, developing AI-driven applications requires a robust, scalable, and secure environment. Dynamiq enables organizations to build and manage agentic and GenAI applications effortlessly, leveraging our comprehensive suite of features:

  • Low-Code AI Workflow Builder: Automate complex tasks and build AI workflows with ease using our intuitive low-code builder. Whether you need to integrate pre-built modules or extend functionality with custom Python code, our platform has you covered.

  • Centralized Data Management with Knowledge & RAG: Organize your data in centralized knowledge bases and ground your applications with retrieval-augmented generation (RAG). Dynamiq covers the full RAG lifecycle, from document pre-processing, splitting, and embedding at indexing time to retrieval and LLM answer generation at inference time, so your applications always answer from your own, governed data.

  • Flexible Deployment: Dynamiq offers flexible deployment options, including on-premise, hybrid-cloud, and cloud-native, to meet varied security, scalability, and compliance needs. On-premise deployment provides full data control, hybrid-cloud balances on-site security with cloud scalability, and cloud-native integrates easily with providers like AWS, Azure, GCP, OCI, and IBM Cloud for rapid deployment. These choices allow organizations to scale GenAI applications securely and efficiently.

  • Efficient AI Workflow Deployment: Deploy AI workflows, LLMs, vector databases, and Docker-based services quickly and reliably. Our platform is optimized for peak performance, allowing you to focus on innovation while we handle the complexities of deployment.

  • LLM Fine-Tuning and Data Ownership: Fine-tune your models with ease and retain full ownership of your data. Our platform supports efficient fine-tuning processes, ensuring that your customized models are ready for deployment in record time, all while keeping your data secure and proprietary.
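To make the "custom Python code in workflows" idea from the first bullet concrete, here is a toy sketch in plain Python. It is not the Dynamiq builder API; every function name here is hypothetical, and it only illustrates the pattern of dropping your own code step between pre-built workflow nodes:

```python
# Hypothetical sketch (NOT the Dynamiq API): a workflow as a chain of
# steps, where each step takes the accumulated state dict and returns
# an updated copy. One step is custom Python logic.

def fetch_document(state: dict) -> dict:
    # Stand-in for a pre-built retrieval node.
    return {**state, "document": "Dynamiq streamlines GenAI development."}

def custom_python_step(state: dict) -> dict:
    # Your own logic, inserted into the workflow as a code node.
    return {**state, "word_count": len(state["document"].split())}

def format_output(state: dict) -> dict:
    # Stand-in for an output-formatting node.
    return {"summary": f"{state['word_count']} words retrieved"}

def run_workflow(steps, state: dict) -> dict:
    # Run each step in order, threading the state through.
    for step in steps:
        state = step(state)
    return state

result = run_workflow(
    [fetch_document, custom_python_step, format_output],
    {"query": "what is dynamiq"},
)
print(result["summary"])  # -> "4 words retrieved"
```

In the low-code builder, the same shape is assembled visually: pre-built nodes are connected on a canvas, and a Python node plays the role of `custom_python_step` wherever built-in modules need extending.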