Dynamiq Docs

Document Embedders


Last updated 6 months ago

Why Document Vectorization is Important

Document vectorization is a crucial step in the indexing workflow: it transforms text data into numerical vector representations (embeddings). These vectors enable efficient similarity searches, letting the RAG application match user queries with relevant documents based on vector proximity. Because this step directly determines retrieval accuracy and speed, it is a vital component of the RAG system.
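To make "vector proximity" concrete, here is a minimal, self-contained sketch of cosine similarity, the measure most commonly used for this kind of matching. The vectors are toy 3-dimensional examples; real embedding models produce hundreds or thousands of dimensions.

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity: dot product normalized by the vectors' magnitudes.
    # Values near 1.0 mean the vectors point in similar directions
    # (semantically similar text); values near 0.0 mean they are unrelated.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy embeddings standing in for a query and two candidate documents.
query = [0.9, 0.1, 0.0]
doc_same_topic = [0.8, 0.2, 0.1]
doc_other_topic = [0.0, 0.2, 0.9]

print(cosine_similarity(query, doc_same_topic))   # close to 1.0
print(cosine_similarity(query, doc_other_topic))  # close to 0.0
```

During retrieval, the document whose embedding scores highest against the query embedding is returned first; this is the search the indexing workflow prepares for.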

Document Embedders

Several document embedders are available, each offering unique capabilities for vectorizing text data. These embedders convert text into high-dimensional vectors, capturing semantic meanings and relationships.

Available Embedders

Key Features of the Document Embedder

  • Model Selection: Choose from various models, such as text-embedding-3-small, to suit your specific needs.

  • Dimensions: Specify the dimensionality of the vectors, which affects the granularity and detail of the representation.

  • Enable Caching: Option to cache embeddings for faster retrieval and reduced computational load.
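As an illustration only, the three options above can be pictured as a small configuration object. The class and field names below are hypothetical stand-ins, not the actual Dynamiq node API:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class EmbedderConfig:
    # Hypothetical configuration mirroring the options described above.
    model: str = "text-embedding-3-small"  # model selection
    dimensions: Optional[int] = None       # None -> use the model's default size
    enable_caching: bool = False           # reuse embeddings for repeated inputs

# Example: a smaller vector size trades some detail for lower storage
# and compute cost; caching avoids re-embedding unchanged documents.
config = EmbedderConfig(
    model="text-embedding-3-small",
    dimensions=512,
    enable_caching=True,
)
```

Note that the chosen `dimensions` value must match what the downstream vector database index expects; mixing vector sizes in one index is a common source of retrieval errors.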

How to Use the Document Vectorizer

1. Input

Provide the split documents from the previous chunking step. The vectorizer will process these documents to generate vector embeddings.

2. Configuration

Select the appropriate embedder and model based on your requirements. Configure the dimensions to balance between detail and computational efficiency.

3. Output

The vectorizer outputs the vectorized documents, ready for storage and retrieval. These vectors are used to perform similarity searches during the inference phase.
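The three steps above can be sketched end to end. This is a minimal, runnable illustration, not the Dynamiq implementation: the `embed` function is a placeholder that returns deterministic dummy vectors where a real embedder would call the model API, and documents are modeled as plain dicts.

```python
_cache = {}

def embed(texts, model="text-embedding-3-small", dimensions=8):
    # Placeholder embedder: a real node would call the configured model's
    # API here. Dummy vectors keep the sketch self-contained and runnable.
    return [[float((hash(t) >> i) % 10) for i in range(dimensions)]
            for t in texts]

def vectorize_documents(split_docs, model="text-embedding-3-small",
                        dimensions=8, enable_caching=True):
    # Step 1 (input): split documents from the chunking step.
    # Step 2 (configuration): model, dimensions, and caching options.
    # Step 3 (output): the same documents, each with an "embedding" field.
    results = []
    for doc in split_docs:
        key = (doc["content"], model, dimensions)
        if enable_caching and key in _cache:
            vector = _cache[key]  # cache hit: skip recomputation
        else:
            vector = embed([doc["content"]], model, dimensions)[0]
            if enable_caching:
                _cache[key] = vector
        results.append({**doc, "embedding": vector})
    return results

chunks = [{"id": "doc-1#0", "content": "Vector search basics."},
          {"id": "doc-1#1", "content": "Storing embeddings."}]
vectorized = vectorize_documents(chunks)
print(len(vectorized[0]["embedding"]))  # 8
```

The output documents, now carrying their embeddings, are what the next stage (document writers) persists to the vector database.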

Benefits of Document Vectorization

  • Efficient Retrieval: Vectors enable quick similarity searches, improving the speed of information retrieval.

  • Enhanced Accuracy: Captures semantic relationships, ensuring that retrieved documents are contextually relevant.

  • Scalability: Handles large datasets efficiently, making it suitable for extensive knowledge bases.

By using document embedders effectively, you can optimize your data for retrieval, ensuring that your RAG application delivers precise and contextually relevant information.

In the next section, we will explore the storage process, detailing how to save vectorized data for efficient retrieval during the inference phase.