Tools and External Integrations
In our workflow builder, there are several tool nodes specifically designed to handle tasks like web scraping, online searching, and making HTTP requests. These nodes offer flexibility in integrating external data sources, enabling workflows to retrieve, process, and utilize information from the web seamlessly.
Tavily (search)

The Tavily tool node is a specialized tool for web search, powered by the Tavily search service. This node allows you to execute search queries with customizable parameters, retrieve relevant web content, and integrate the results directly into your workflows.
Configuration
Name: Customizable name for identifying this node.
Connection: The connection configuration for Tavily.
Description: Short description of the node’s functionality.
Optimized for Agents: When enabled, formats the output specifically for AI agents, structuring the results in a format that agents can easily parse, interpret, and display.
Input
query: A string containing the search query to execute.
Output
Content:
Standard (if Optimized for Agents disabled) - dict output with the following fields:
result: Formatted string output of the search results, including source URL, title, content, and relevance score.
sources_with_url: List of sources with URLs in a clickable format, useful for quickly accessing original pages.
raw_response: Full raw JSON response from Tavily, useful for advanced data processing.
images: List of image URLs.
answer: Short answer generated by Tavily for the search query (if available).
query: Echo of the original search query.
response_time: The time taken for Tavily to respond to the query.
Example:
{ "result": "Formatted search results", "sources_with_url": ["[AI News](https://example.com/article1)", "[Tech Updates](https://example.com/article2)"], "raw_response": { ... }, "images": ["https://example.com/image1.jpg", "https://example.com/image2.jpg"], "answer": "Artificial Intelligence continues to evolve rapidly, with key developments in machine learning and natural language processing.", "query": "latest advancements in AI technology", "response_time": 0.23 }
Optimized for Agents - formatted string
Example:
<Sources with URLs>
[AI News](https://example.com/article1)
[Tech Updates](https://example.com/article2)
<\Sources with URLs>
<Search results for query latest advancements in AI technology>
Source: https://example.com/article1
Title: AI News
Content: Details about AI advancements...
Relevance Score: 0.95
...
<\Search results for query latest advancements in AI technology>
<Answer>
Artificial Intelligence continues to evolve rapidly, with key developments in machine learning and natural language processing.
<\Answer>
Connection Configuration

Type: Tavily
Name: Customizable name for identifying this connection.
API key: Your API key
Usage Example

Add an Input node and specify query.
Drag a Tavily node into the workspace and connect it to the Input node. Set the desired configuration.
Attach a downstream node (e.g. Output) to handle the search content.
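For reference, the output fields above mirror what Tavily's public search API returns. A minimal standalone sketch, assuming the https://api.tavily.com/search endpoint and a placeholder API key (the node issues an equivalent request via its connection):

import requests

# Sketch of a direct Tavily search call; the API key normally lives in the
# node's connection configuration rather than in code.
response = requests.post(
    "https://api.tavily.com/search",
    json={
        "api_key": "YOUR_API_KEY",   # placeholder
        "query": "latest advancements in AI technology",
        "include_answer": True,      # ask Tavily for the short generated answer
        "include_images": True,      # ask for image URLs as well
    },
    timeout=30,
)
data = response.json()
print(data.get("answer"))            # short answer, if available
for item in data["results"]:         # each result carries url, title, content, score
    print(item["url"], item["score"])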
Jina (search)

The Jina Searcher tool node allows users to perform web searches using the Jina AI API, with support for semantic and neural search.
It accepts a query and optional parameters, returning relevant results based on content similarity. Ideal for workflows needing smarter search beyond keyword matching.
Configuration
Name: Customizable name for identifying this node.
Connection: The connection configuration for Jina.
Description (for Agents): Short description of the node’s functionality.
Query: A string containing the search query. This can also be dynamically provided in the Input.
Max results: Maximum number of results to return.
Include images: A Boolean value indicating whether image results should be included.
Include full content: A Boolean value indicating whether to return the full content or just a summary/snippet.
Input
query: A string containing the search query. If it's missing, the query from the configuration will be used.
Output
Content:
Standard - dict output with the following fields:
result: Formatted string output of the search results, including title, URL, and content description.
sources_with_url: List of sources in [Title](URL) format for quick access to original pages.
images: Dictionary with name and URL for all images.
raw_response: Full raw JSON response from Jina AI, useful for advanced data analysis and processing.
Example:
{ "result": "Title: Tech News Today\nLink: https://example.com/article1\nSnippet: Highlights of today's tech news...\n\nTitle: Innovation Daily\nLink: https://example.com/article2\nSnippet: Key developments in the tech industry...\n\nTitle: Future Insights\nLink: https://example.com/article3\nSnippet: Upcoming tech trends...", "sources_with_url": [ "[Tech News Today](https://example.com/article1)", "[Innovation Daily](https://example.com/article2)", "[Future Insights](https://example.com/article3)" ], "images": {}, "raw_response": {...} }
Optimized for Agents - formatted string
Example:
<Sources with URLs>
[Tech News Today](https://example.com/article1)
[Innovation Daily](https://example.com/article2)
<\Sources with URLs>
<Search results for query tech news>
Title: Tech News Today
Link: https://example.com/article1
Description: Highlights of today's tech news...
Title: Innovation Daily
Link: https://example.com/article2
Description: Key developments in the tech industry...
<\Search results>
Connection Configuration

Type: Jina
Name: Customizable name for identifying this connection.
API key: Your API key
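Under the hood the node is assumed to call Jina's search endpoint (s.jina.ai). A minimal sketch with a placeholder API key; the Accept header requests structured JSON rather than plain text, and the field names in the response ("data", "title", "url") are assumptions based on Jina's public API:

import requests
from urllib.parse import quote

# Sketch of a direct Jina search call; the node wraps an equivalent request.
query = "tech news"
response = requests.get(
    "https://s.jina.ai/" + quote(query),
    headers={
        "Authorization": "Bearer YOUR_API_KEY",  # placeholder
        "Accept": "application/json",            # structured results instead of text
    },
    timeout=30,
)
for item in response.json()["data"]:
    print(f"[{item['title']}]({item['url']})")   # same [Title](URL) shape as sources_with_url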
ScaleSerp (search)

The ScaleSerp tool node enables users to perform web searches using the Scale SERP API. It is designed to retrieve search results across various search types, such as organic web search, news, images, and videos. This tool is ideal for workflows that need dynamic web search capabilities and flexible data retrieval.
Configuration
Name: Customizable name for identifying this node.
Connection: The connection configuration for ScaleSerp.
Description: Short description of the node’s functionality.
Optimized for Agents: When enabled, formats the output specifically for AI agents, structuring the results in a format that agents can easily parse, interpret, and display.
Input
query: A string containing the search query to execute.
Output
Content:
Standard (if Optimized for Agents disabled) - dict output with the following fields:
result: Formatted string output of the search results, including title, URL, and content snippets.
sources_with_url: List of sources in [Title: (URL)] format for quick access to original pages.
urls: List of URLs of the search results.
raw_response: Full raw JSON response from Scale SERP, useful for advanced data analysis and processing.
Example:
{ "result": "Title: Tech News Today\nLink: https://example.com/article1\nSnippet: Highlights of today's tech news...\n\nTitle: Innovation Daily\nLink: https://example.com/article2\nSnippet: Key developments in the tech industry...\n\nTitle: Future Insights\nLink: https://example.com/article3\nSnippet: Upcoming tech trends...", "sources_with_url": [ "Tech News Today: (https://example.com/article1)", "Innovation Daily: (https://example.com/article2)", "Future Insights: (https://example.com/article3)" ], "urls": [ "https://example.com/article1", "https://example.com/article2", "https://example.com/article3" ], "raw_response": {...} }
Optimized for Agents - formatted string
Example:
<Sources with URLs>
[Tech News Today: (https://example.com/article1)]
[Innovation Daily: (https://example.com/article2)]
<\Sources with URLs>
<Search results>
Title: Tech News Today
Link: https://example.com/article1
Snippet: Highlights of today's tech news...
Title: Innovation Daily
Link: https://example.com/article2
Snippet: Key developments in the tech industry...
<\Search results>
Connection Configuration

Type: ScaleSerp
Name: Customizable name for identifying this connection.
API key: Your API key
Usage Example

Add an Input node and specify query in the input field.
Drag a ScaleSerp node into the workspace and connect it to the Input node. Set the desired configuration.
Attach a downstream node (e.g. Output) to handle the search content.
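For orientation, a hedged sketch of the underlying Scale SERP request (endpoint and parameter names per the public API; the key is a placeholder):

import requests

# Sketch of a direct Scale SERP call; the node wraps an equivalent request.
# Other search types (news, images, videos) are selected with an additional
# search_type parameter.
response = requests.get(
    "https://api.scaleserp.com/search",
    params={
        "api_key": "YOUR_API_KEY",  # placeholder
        "q": "tech news",           # the search query
    },
    timeout=30,
)
for item in response.json().get("organic_results", []):
    print(item["title"], item["link"])  # title/link/snippet feed the formatted result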
ZenRows (scraping)

The ZenRows tool node provides a powerful solution for web scraping, allowing users to extract content from web pages using the ZenRows service. It is ideal for integrating scraped content directly into workflows for further processing or analysis.
Configuration
Name: Customizable name for identifying this node.
Connection: The connection configuration for ZenRows.
Description: Short description of the node’s functionality.
URL: The URL of the page to scrape. This can also be dynamically provided in the Input.
Optimized for Agents: When enabled, formats the output specifically for AI agents, structuring the results in a format that agents can easily parse, interpret, and display.
Input
URL: A string containing the URL of the page to scrape. If it's missing, the URL from the configuration will be used.
Output
Content:
Standard (if Optimized for Agents disabled) - dict output with the following fields:
url: The URL of the scraped page.
content: The main content of the page in Markdown format.
Example:
{ "url": "https://example.com/article", "content": "# Article Title\n\nThis is the introductory paragraph of the article...\n\n## Key Points\n\n- Point 1: Description of the first point.\n- Point 2: Description of the second point.\n\nConclusion with some final remarks." }
Optimized for Agents - formatted string
Example:
<Source URL>
https://example.com/article
<\Source URL>
<Scraped result>
# Article Title
This is the introductory paragraph of the article...
## Key Points
- Point 1: Description of the first point.
- Point 2: Description of the second point.
Conclusion with some final remarks.
<\Scraped result>
Connection Configuration

Type: ZenRows
Name: Customizable name for identifying this connection.
API key: Your API key
Usage Example

Workflow with url specified in Input node
Add an Input node and specify url.
Drag a ZenRows node into the workspace and connect it to the Input node. Set the desired configuration.
Attach a downstream node (e.g. Output) to handle the scraped content.

Workflow with url specified in ZenRows node
Add an Input node.
Drag a ZenRows node into the workspace and connect it to the Input node. Set the desired configuration and URL.
Attach a downstream node (e.g. Output) to handle the scraped content.
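As a point of reference, the node is assumed to issue a request like the following against ZenRows' scraping endpoint (placeholder key; the node then converts the returned page into the Markdown content field):

import requests

# Sketch of a direct ZenRows call: the target page goes in the url parameter
# and the API key authenticates the request.
response = requests.get(
    "https://api.zenrows.com/v1/",
    params={
        "apikey": "YOUR_API_KEY",              # placeholder
        "url": "https://example.com/article",  # page to scrape
    },
    timeout=30,
)
print(response.text[:500])  # raw page content; Markdown conversion happens downstream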
Jina (scraping)

The Jina Scraper tool node enables users to extract content from web pages using the Jina AI API, supporting both structured and unstructured data extraction. Ideal for workflows that require automated ingestion of web content for downstream processing like summarization, embedding, or indexing.
Configuration
Name: Customizable name for identifying this node.
Connection: The connection configuration for Jina.
Description (for Agents): Short description of the node’s functionality.
URL: A string containing the URL to scrape. This can also be dynamically provided in the Input.
Response format:
Default: The default pipeline optimized for most websites and suitable for LLM input. It applies readability filtering to extract clean, relevant content.
Markdown: Extracts and returns the page content in Markdown format, bypassing readability filtering for raw HTML-to-Markdown conversion.
HTML: Returns the full raw HTML content of the page using document.documentElement.outerHTML.
Text: Returns plain text extracted from the page using document.body.innerText, ideal for simple text-based processing.
Screenshot: Captures and returns the image URL of the visible portion of the page (i.e., the first screen).
Pageshot: Captures and returns the image URL of a full-page screenshot (best-effort rendering of the entire scrollable content).
Input
URL: A string containing the URL of the page to scrape. If it's missing, the URL from the configuration will be used.
Output
Content:
Standard - dict output with the following fields:
url: The URL of the scraped page.
content: The main content of the page, in the format chosen in the configuration.
Example:
{ "url": "https://example.com/article", "content": "# Article Title\n\nThis is the introductory paragraph of the article...\n\n## Key Points\n\n- Point 1: Description of the first point.\n- Point 2: Description of the second point.\n\nConclusion with some final remarks." }
Optimized for Agents - formatted string
Example:
<Source URL>
https://example.com/article
<\Source URL>
<Scraped result>
# Article Title
This is the introductory paragraph of the article...
## Key Points
- Point 1: Description of the first point.
- Point 2: Description of the second point.
Conclusion with some final remarks.
<\Scraped result>
Connection Configuration

Type: Jina
Name: Customizable name for identifying this connection.
API key: Your API key
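The Response format options map naturally onto Jina Reader's X-Return-Format header. A minimal sketch, assuming the r.jina.ai endpoint and a placeholder key:

import requests

# Sketch of a direct Jina Reader call: the target URL is appended to the
# r.jina.ai prefix, and X-Return-Format selects markdown/html/text/screenshot/pageshot.
response = requests.get(
    "https://r.jina.ai/https://example.com/article",
    headers={
        "Authorization": "Bearer YOUR_API_KEY",  # placeholder
        "X-Return-Format": "markdown",           # matches the node's Markdown option
    },
    timeout=30,
)
print(response.text[:500])  # page content in the requested format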
FireCrawl (scraping)

The FireCrawl tool node is a web scraping tool powered by FireCrawl. It enables users to extract structured content from web pages. This tool is ideal for workflows requiring flexible web data extraction capabilities.
Configuration
Name: Customizable name for identifying this node.
Connection: The connection configuration for FireCrawl.
Description: Short description of the node’s functionality.
URL: The URL of the page to scrape. This can also be dynamically provided in the Input.
Optimized for Agents: When enabled, formats the output specifically for AI agents, structuring the results in a format that agents can easily parse, interpret, and display.
Input
URL: A string containing the URL of the page to scrape. If it's missing, the URL from the configuration will be used.
Output
Content:
Standard (if Optimized for Agents disabled) - dict output with the following fields:
success: Boolean indicating whether the scraping was successful.
url: The URL of the scraped page.
markdown: Content in Markdown format (if applicable).
content: Main text content of the page.
html: Full HTML content if Include HTML was enabled.
raw_html: Raw HTML if extracted without additional processing.
metadata: Metadata information extracted from the page.
llm_extraction: Extracted content optimized for language model processing.
warning: Any warnings or messages returned during the scraping process.
Example:
{ "success": true, "url": "https://example.com/news", "content": "Full article content in plain text", "llm_extraction": { "summary": "This news article covers recent developments in tech.", "key_points": [ "AI advancements in 2023.", "Growth in renewable energy adoption.", "Cybersecurity challenges in modern industries." ] }, "html": "<html><body>...</body></html>", "markdown": "", "metadata": {}, "warning": null }
Optimized for Agents - formatted string
Example:
<Source URL>
https://example.com/news
<\Source URL>
<LLM Extraction>
Summary: This news article covers recent developments in tech.
Key Points:
- AI advancements in 2023.
- Growth in renewable energy adoption.
- Cybersecurity challenges in modern industries.
<\LLM Extraction>
Connection Configuration

Type: FireCrawl
Name: Customizable name for identifying this connection.
API key: Your API key.
Usage Example

Workflow with url specified in Input node:
Add an Input node and specify url.
Drag a FireCrawl node into the workspace and connect it to the Input node. Set the desired configuration.
Attach a downstream node (e.g. Output) to handle the scraped content.

Workflow with url specified in FireCrawl node:
Add an Input node.
Drag a FireCrawl node into the workspace and connect it to the Input node. Set the desired configuration and URL.
Attach a downstream node (e.g. Output) to handle the scraped content.
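For reference, a hedged sketch of a direct FireCrawl scrape request (endpoint and payload per FireCrawl's v1 API; the node is assumed to wrap an equivalent call):

import requests

# Sketch of a direct FireCrawl scrape; the formats list controls which
# representations (markdown, html, ...) come back under "data".
response = requests.post(
    "https://api.firecrawl.dev/v1/scrape",
    headers={"Authorization": "Bearer YOUR_API_KEY"},  # placeholder
    json={
        "url": "https://example.com/news",
        "formats": ["markdown", "html"],
    },
    timeout=60,
)
payload = response.json()
print(payload["success"])
print(payload["data"].get("markdown", "")[:200])  # scraped content in Markdown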
HTTP API Call

The HTTP API Call node is a versatile tool for making HTTP API requests. This node allows for dynamic configuration of request parameters, headers, and data payload, making it suitable for interacting with various APIs. The node is flexible and can handle responses in JSON, text, or raw data format.
Configuration
Name: Customizable name for identifying this node.
Connection: Connection to the HTTP service, specifying the base URL and HTTP method (GET, POST, etc.).
Description: Short description of the node’s functionality.
Response Type: Specifies the expected response format, with options:
json: Parses the response as JSON and returns it as a dict.
text: Returns the response as a plain text string.
raw: Returns the raw binary response (bytes), useful for non-text responses (e.g., images).
Timeout: Sets the maximum time in seconds the node will wait for a response, with a default of 30 seconds.
URL: Endpoint URL for the request.
Payload Type: Specifies the expected data format, with options:
json: Dictionary object to send in the body of the request.
raw: A JSON-serializable object to send in the request's body.
Headers: Dictionary of HTTP headers to include in the request, such as Content-Type.
Data: Dictionary containing the data to send as the request body, applicable mainly for POST, PUT, or PATCH requests.
Params: A dictionary of query parameters to append to the URL.
Input
url: Endpoint URL for the request.
data: Dictionary containing the data to send as the request body, applicable mainly for POST, PUT, or PATCH requests.
Payload Type: Specifies the expected data format, with options:
json: Dictionary object to send in the body of the request.
raw: A JSON-serializable object to send in the request's body.
headers: Dictionary containing additional headers to include in the request.
params: Dictionary of query parameters to append to the URL.
Output
content: The main content of the response, returned in the format specified by the Response Type configuration:
json: Returns the response as a dict.
text: Returns the response as a string.
raw: Returns the response as raw binary data (bytes).
status_code: The HTTP status code returned by the API, indicating the success or failure of the request.
Example:
{
  "content": {
    "data": {
      "id": 1,
      "name": "Sample Data"
    }
  },
  "status_code": 200
}
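The three Response Type options correspond to the three standard ways of reading an HTTP response body. An illustrative sketch using Python's requests library (not the node's actual implementation; the endpoint is hypothetical):

import requests

resp = requests.get("https://api.example.com/v1/data", timeout=30)  # hypothetical endpoint

content_json = resp.json()     # "json": parsed into a dict
content_text = resp.text       # "text": decoded plain string
content_raw = resp.content     # "raw": undecoded bytes, e.g. for images

print(resp.status_code)        # surfaced as the node's status_code output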
Connection Configuration

Type: Http
Name: Customizable name for identifying this connection.
URL: API URL (e.g. https://api.openai.com/v1/ for OpenAI).
Method: HTTP method (GET, POST, etc.).
Headers: Dictionary containing additional headers to include in the request.
Params: Dictionary of query parameters to append to the URL.
Data: Dictionary containing the data to send as the request body, applicable mainly for POST, PUT, or PATCH requests.
Priority of data, headers, and params
The HTTP API Call node allows flexibility in defining data, headers, and params through multiple sources, applied in the following order of priority:
Input: The highest priority and supplied dynamically during execution, e.g., from another node in the workflow. If data, headers, or params are specified here, they will override any values set in the node configuration or connection.
Node Configuration: Set directly in the node’s settings during workflow design. If no values are provided in the input, the node uses values defined directly in the node configuration.
Connection: Defined within the HTTP connection. Default values set in the connection (like base URL or default headers) are used only if neither the input data nor the node configuration specifies them. This is particularly useful for setting default headers, authentication tokens, or base parameters that apply to all requests made using the connection.
Example of Priority Usage
Consider an HTTP API Call node configured as follows:
Connection:
Headers:
{
  "Authorization": "Bearer default_token"
}
Params:
{
  "api_version": "v1"
}
Node Configuration:
Headers:
{
  "Content-Type": "application/json"
}
Params:
{
  "locale": "en-US"
}
Input (provided at runtime):
Headers:
{
  "Authorization": "Bearer runtime_token"
}
Params:
{
  "api_version": "v1",
  "debug": "true"
}
Given this setup, the node will prioritize and combine the values as follows:
Headers:
{
  "Authorization": "Bearer runtime_token",
  "Content-Type": "application/json"
}
Params:
{
  "locale": "en-US",
  "api_version": "v1",
  "debug": "true"
}
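The merge behaviour above amounts to a keyed dictionary merge in which later sources win. A small illustrative sketch (not the node's actual code):

def merge_settings(connection: dict, node_config: dict, runtime_input: dict) -> dict:
    # Later dictionaries override earlier ones key by key; keys that are not
    # overridden are preserved, so the sources are combined, not replaced.
    return {**connection, **node_config, **runtime_input}

headers = merge_settings(
    {"Authorization": "Bearer default_token"},   # connection
    {"Content-Type": "application/json"},        # node configuration
    {"Authorization": "Bearer runtime_token"},   # input
)
params = merge_settings(
    {"api_version": "v1"},                       # connection
    {"locale": "en-US"},                         # node configuration
    {"api_version": "v1", "debug": "true"},      # input
)
# headers -> {"Authorization": "Bearer runtime_token", "Content-Type": "application/json"}
# params  -> {"api_version": "v1", "locale": "en-US", "debug": "true"}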
Usage Example

Add an Input node and specify data, headers, and query parameters to supply to the request dynamically.
Drag an HTTP API Call node into the workspace and connect it to the Input node. Set the desired configuration.
Attach a downstream node (e.g. Output) to handle the API Call content and status_code.
E2B Interpreter

The E2B Interpreter node enables users to interact with an E2B sandbox environment for executing Python code, running shell commands, and managing files within a secure, isolated environment. This node is ideal for workflows that require on-demand computation, filesystem operations, and API requests.
Currently, the E2B Interpreter node is optimized for use with Agents, offering full functionality for dynamic code execution, shell command processing, and file management. While it can be used in workflows, some configurations and advanced features are still under development for seamless integration in workflow settings.
Configuration
Name: Customizable name for identifying this node.
Connection: Configuration for connecting to the E2B environment.
Description: Short description of the node’s functionality.
Optimized for Agents: When enabled, formats the output specifically for AI agents, structuring the results in a format that agents can easily parse, interpret, and display.
Input
input: The main input field for specifying either a shell command, Python code, or package installations.
Output
Content:
Standard (if Optimized for Agents disabled) - dict output with the following fields:
files_installation: Details about uploaded files.
packages_installation: Installed packages.
shell_command_execution: Shell command output.
code_execution: Python code execution result.
Example:
{ "files_installation": "File uploaded: example.txt -> /sandbox/example.txt", "packages_installation": "Installed packages: requests", "shell_command_execution": "total 8\ndrwxr-xr-x 2 root root 4096 Nov 1 10:00 .\ndrwxr-xr-x 1 root root 4096 Nov 1 10:00 ..", "code_execution": "{\"key\": \"value\"}" }
Optimized for Agents - formatted string
Example:
<Files installation>
File uploaded: example.txt -> /sandbox/example.txt
</Files installation>
<Package installation>
Installed packages: requests
</Package installation>
<Shell command execution>
total 8
drwxr-xr-x 2 root root 4096 Nov 1 10:00 .
drwxr-xr-x 1 root root 4096 Nov 1 10:00 ..
</Shell command execution>
<Code execution>
{"key": "value"}
</Code execution>
Connection Configuration

Type: e2b
Name: Customizable name for identifying this connection.
API key: Your API key.
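Outside the workflow builder, the same capability is exposed by E2B's Python SDK. A minimal sketch, assuming the e2b-code-interpreter package and an API key in the E2B_API_KEY environment variable (method names may vary between SDK versions):

from e2b_code_interpreter import Sandbox

sandbox = Sandbox()                           # starts an isolated sandbox; key read from E2B_API_KEY
execution = sandbox.run_code("print(1 + 1)")  # execute Python inside the sandbox
print(execution.logs.stdout)                  # captured stdout, e.g. ['2\n']
sandbox.kill()                                # shut the sandbox down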
SQL Executor

The SQL Executor node is used for SQL query execution. It supports retrieving data and performing various database operations with MySQL, PostgreSQL, Redshift, and Snowflake. This node allows the query to be set dynamically, either as input data or as part of the node configuration.
Configuration
Name: Customizable name for identifying this node.
Connection: Configuration for connecting to the database.
Description: Short description of the node’s functionality.
Query: The SQL statement to execute.
Input
query: The main input field for specifying the SQL statement to execute.
Output
Content:
Standard (if Optimized for Agents disabled) - the content of the response, returned as a list of dictionaries if the query retrieves data (e.g., a SELECT statement), or as an empty list upon successful execution of statements that return no rows (e.g., INSERT, UPDATE, DELETE).
Example without data retrieving:
[] - MySQL, PostgreSQL, Redshift DBs
[{"number of rows inserted": 2}] - Snowflake DB
Example with data retrieving:
{"Name": "Row1Name", "Description": "Row1Description"}, {"Name": "Row2Name", "Description": "Row2Description"}, {"Name": "Row3Name", "Description": "Row3Description"}, {"Name": "Row4Name", "Description": "Row4Description"}, ]
Optimized for Agents - formatted string
Example with data retrieving and non-empty list:
"Row 1 Name: Row1Name Description: Row1Description Row 2 Name: Row2Name Description: Row2Description Row 3 Name: Row3Name Description: Row3Description Row 4 Name: Row4Name Description: Row4Description"
Example without data retrieving and empty list:
"Query "INSERT INTO test (`Name`, `Description`) VALUES ('Row1Name', 'Row1Description'), ('Row2Name', 'Row2Description');" executed successfully. No results returned."
Connection Configuration
This node supports four different types of connections, each with its own specific parameters.


Amazon Redshift connection
Type: Amazon Redshift.
Name: Customizable name for identifying this connection.
Host: Database server address.
Port: Connection port number, set to 5439 by default.
Database: Name of the database.
Username: Database user name.
Password: Database access password.
MySQL connection
Type: MySQL.
Name: Customizable name for identifying this connection.
Host: Database server address.
Port: Connection port number, set to 3306 by default.
Database: Name of the database.
Username: Database user name.
Password: Database access password.


PostgreSQL connection
Type: PostgreSQL.
Name: Customizable name for identifying this connection.
Host: Database server address.
Port: Connection port number, set to 5432 by default.
Database: Name of the database.
Username: Database user name.
Password: Database access password.
Snowflake connection
Type: Snowflake.
Name: Customizable name for identifying this connection.
User: Database login name.
Password: User authentication password.
Account: Snowflake account identifier.
Warehouse: Compute warehouse name.
Database: Name of the database.
Schema: Database schema name.