
Building a Code Assistant



In this guide, we'll walk through building a powerful ReAct agent designed to handle a range of coding tasks, including code writing, execution, data analysis, and web-based activities. With the help of the E2B (code execution) tool and a set of specialized integrations, this agent can tackle file management, internet searches, model training, and data analytics. We'll explore each step of the workflow, from creating and configuring the agent to deploying and testing it in real-world scenarios.

For code execution tasks, using a model with advanced programming comprehension is critical. Claude 3.5 Sonnet and GPT-4o are recommended for their higher-level understanding of programming concepts, which leads to better code generation and efficient problem-solving. These models are well-suited for complex coding environments where accuracy and sophistication are essential.

The code execution feature includes built-in error handling. If an exception is raised during code execution, the agent receives the error message and can attempt to correct the code or suggest alternative solutions.
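
Conceptually, this correct-and-retry loop looks something like the sketch below. It is only an illustration of the pattern in plain Python, not Dynamiq's internal implementation; the two callables stand in for the LLM call and the E2B sandbox execution.

from typing import Callable


def solve_with_retries(
    task: str,
    generate_code: Callable[[str, str], str],  # stand-in for the LLM call: (task, feedback) -> code
    run_in_sandbox: Callable[[str], str],      # stand-in for E2B execution: code -> output
    max_attempts: int = 3,
) -> str:
    """Ask for code, run it, and feed any exception back so the next attempt can fix it."""
    feedback = ""
    for attempt in range(1, max_attempts + 1):
        code = generate_code(task, feedback)
        try:
            return run_in_sandbox(code)
        except Exception as exc:  # execution failed inside the sandbox
            feedback = f"Attempt {attempt} raised: {exc!r}"
    raise RuntimeError(f"Task not solved after {max_attempts} attempts. Last error: {feedback}")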

Getting Started: Building the Workflow

To create a ReAct coding agent, we'll start by setting up a structured workflow. The essential steps are as follows:

  1. Set up the ReAct agent — the backbone of your automation.

  2. Integrate E2B as a tool for code execution, providing a foundation for the agent's coding tasks.

Let's go through these steps in more detail.


Step 1: Creating the ReAct Agent

Before diving into integrations, we'll start by building a ReAct-based agent. This setup will serve as the base framework, giving your agent a dedicated environment to operate in.

The E2B tool is essential for enabling code-writing and execution capabilities in your agent. By integrating E2B, the agent can handle coding tasks directly, making it a valuable asset for development-related workflows.

Defining the agent’s role helps clarify its tasks and purpose. A well-structured prompt can help guide the agent’s actions, especially when it has multiple tools and functionalities.

Example Prompt:

A helpful AI assistant skilled in language, Python programming, and Linux commands. The goal is to provide clear, brief answers to the user. For tasks that require code, first outline a plan, then write well-structured Python code, check for errors, and run it to confirm it works. Use any free, open-source API that doesn’t need authorization, and install necessary packages for handling specific file types, like PDFs or binary files. When working with binary files, understand the file format before reading them.
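
If you prefer working in code, the same setup can be sketched with Dynamiq's open-source Python SDK. The snippet below is a minimal sketch based on the SDK's public examples; the module paths, class names, parameters (ReActAgent, E2BInterpreterTool, the connection classes), and the shape of the run result are assumptions that may differ between versions, so check the current SDK documentation.

from dynamiq.connections import E2B as E2BConnection, OpenAI as OpenAIConnection
from dynamiq.nodes.agents.react import ReActAgent
from dynamiq.nodes.llms import OpenAI
from dynamiq.nodes.tools.e2b_sandbox import E2BInterpreterTool

# Code-execution tool backed by an E2B sandbox.
e2b_tool = E2BInterpreterTool(connection=E2BConnection(api_key="YOUR_E2B_API_KEY"))

# An LLM with strong programming comprehension, as recommended above.
llm = OpenAI(
    connection=OpenAIConnection(api_key="YOUR_OPENAI_API_KEY"),
    model="gpt-4o",
    temperature=0.1,
)

# The ReAct agent, using a condensed version of the role prompt above.
agent = ReActAgent(
    name="coding-assistant",
    llm=llm,
    tools=[e2b_tool],
    role=(
        "A helpful AI assistant skilled in language, Python programming, and Linux commands. "
        "For tasks that require code, first outline a plan, then write well-structured Python "
        "code, check for errors, and run it to confirm it works."
    ),
)

# The exact structure of the result object may vary by SDK version.
result = agent.run(input_data={"input": "Cluster 100 random 2D points with K-means and report the centers."})
print(result.output)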


Step 2: Deploying and Testing the Workflow

After building and configuring your agent, it’s time to deploy and test it across various scenarios to ensure all tools and functionalities work harmoniously.
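
Once deployed, the workflow can also be exercised programmatically over HTTP. The request below is only a hedged sketch: the endpoint URL, payload shape, and access key are placeholders, so copy the real values from your deployment details and access keys pages.

import requests

# Placeholder endpoint and key: replace with the values from your own deployment.
ENDPOINT = "https://<your-dynamiq-endpoint>/v1/workflows/<workflow-id>/run"
API_KEY = "<your-access-key>"

payload = {
    "input": (
        "Generate 100 random 2D points, cluster them with K-means (k=3), "
        "and report the labels and cluster centers in Markdown."
    )
}

response = requests.post(
    ENDPOINT,
    json=payload,
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=120,
)
response.raise_for_status()
print(response.json())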

  • Example of using an external API call to perform a search and run some analysis through generated code

Here is an example of an intermediate step.

  • Example of handling data generation and modeling tasks

And the final output, rendered in proper Markdown format:

K-means Clustering Report

Code

import numpy as np
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans

# Generate synthetic data
np.random.seed(42)  # For reproducibility
X = np.random.rand(100, 2)  # 100 points in 2D

# Apply K-means clustering
kmeans = KMeans(n_clusters=3)
kmeans.fit(X)
labels = kmeans.labels_
centers = kmeans.cluster_centers_

# Plot the results
plt.figure(figsize=(8, 6))
plt.scatter(X[:, 0], X[:, 1], c=labels, cmap='viridis', marker='o', edgecolor='k')
plt.scatter(centers[:, 0], centers[:, 1], c='red', s=200, alpha=0.75, marker='X')
plt.title('K-means Clustering')
plt.xlabel('Feature 1')
plt.ylabel('Feature 2')
plt.grid(True)
plt.show()

Results

  • Labels: The clustering labels for each of the 100 data points are as follows:

[0, 2, 0, 0, 2, 0, 1, 0, 0, 1, 1, 0, 2, 0, 1, 1, 0, 2, 1, 1, 0, 0, 0, 0, 1, 2, 2, 2, 0, 0, 1, 1, 0, 0, 0, 1, 0, 2, 1, 1, 2, 1, 0, 2, 2, 0, 2, 2, 1, 0, 0, 0, 1, 0, 1, 1, 2, 2, 1, 2, 2, 1, 0, 2, 0, 1, 0, 1, 2, 0, 1, 1, 1, 2, 0, 1, 0, 1, 2, 0, 0, 2, 0, 1, 0, 1, 1, 1, 1, 1, 1, 2, 0, 2, 1, 0, 2, 0, 2, 2]

  • Cluster Centers: The coordinates of the cluster centers are:

    [[0.17284770118135123, 0.59187921496425],
     [0.593866956168038, 0.19927680984408092],
     [0.7685495540478937, 0.7122501507434474]]

Conclusion

The K-means algorithm successfully clustered the data into 3 groups based on the generated features. The visualization shows the distribution of the data points and the identified cluster centers.