# Tools and External Integrations

In our workflow builder, there are several tool nodes specifically designed to handle tasks like web scraping, online searching, and making HTTP requests. These nodes offer flexibility in integrating external data sources, enabling workflows to retrieve, process, and utilize information from the web seamlessly.

## Tavily (search)

<figure><img src="https://4279757243-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FTbBxR0Ob7RUmbvHZkQi2%2Fuploads%2F4qyov28Aacm03QW61iTp%2Ftavily.png?alt=media&#x26;token=6a2a2904-c6b3-47b4-a823-fe1d0dbf2951" alt="" width="563"><figcaption><p>Tavily search tool node</p></figcaption></figure>

The **Tavily** tool node is a specialized tool for web search, powered by the Tavily search service. This node allows you to execute search queries with customizable parameters, retrieve relevant web content, and integrate the results directly into your workflows.&#x20;

### **Configuration** <a href="#tavily-configuration" id="tavily-configuration"></a>

* **Name**: Customizable name for identifying this node.
* **Connection**: The connection configuration for Tavily.
* **Description**: Short description of the node’s functionality.
* **Optimized for Agents**: When enabled, formats the output specifically for AI agents, structuring the results in a format that agents can easily parse, interpret, and display.

### **Input** <a href="#tavily-input" id="tavily-input"></a>

* **query**: A `string` containing the search query to execute.

### **Output** <a href="#tavily-output" id="tavily-output"></a>

* **Content**:&#x20;
  * **Standard** (if **Optimized for Agents** is disabled) - `dict` output with the following fields:

    * **result**: Formatted string output of the search results, including source URL, title, content, and relevance score.
    * **sources\_with\_url**: List of sources with URLs in a clickable format, useful for quickly accessing original pages.
    * **raw\_response**: Full raw JSON response from Tavily, useful for advanced data processing.
    * **images**: List of image URLs.
    * **answer**: Short answer generated by Tavily for the search query (if available).
    * **query**: Echo of the original search query.
    * **response\_time**: The time taken for Tavily to respond to the query.

    Example:

    ```json
    {
        "result": "Formatted search results",
        "sources_with_url": ["[AI News](https://example.com/article1)", "[Tech Updates](https://example.com/article2)"],
        "raw_response": { ... },
        "images": ["https://example.com/image1.jpg", "https://example.com/image2.jpg"],
        "answer": "Artificial Intelligence continues to evolve rapidly, with key developments in machine learning and natural language processing.",
        "query": "latest advancements in AI technology",
        "response_time": 0.23
    }
    ```
  * **Optimized for Agents -** formatted `string`

    Example:

    ```xml
    <Sources with URLs>
    [AI News](https://example.com/article1)
    [Tech Updates](https://example.com/article2)
    <\Sources with URLs>

    <Search results for query latest advancements in AI technology>
    Source: https://example.com/article1
    Title: AI News
    Content: Details about AI advancements...
    Relevance Score: 0.95
    ...
    <\Search results for query latest advancements in AI technology>

    <Answer>
    Artificial Intelligence continues to evolve rapidly, with key developments in machine learning and natural language processing.
    <\Answer>
    ```
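For orientation, the node's behavior can be sketched as a direct call to Tavily's REST API. This is a minimal illustration, not the node's implementation: the `/search` endpoint and JSON payload follow Tavily's public API, and `sources_with_url` is a hypothetical helper that rebuilds the clickable source list shown in the example above.

```python
import json
import urllib.request

TAVILY_ENDPOINT = "https://api.tavily.com/search"  # Tavily's public search endpoint

def tavily_search(api_key: str, query: str, max_results: int = 5) -> dict:
    """POST the query to Tavily and return the raw JSON response
    (what the node exposes as `raw_response`)."""
    body = json.dumps({"api_key": api_key, "query": query,
                       "max_results": max_results}).encode("utf-8")
    req = urllib.request.Request(TAVILY_ENDPOINT, data=body,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.load(resp)

def sources_with_url(raw_response: dict) -> list:
    """Rebuild the node's clickable "[Title](URL)" source list."""
    return ["[{}]({})".format(r["title"], r["url"])
            for r in raw_response.get("results", [])]
```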

### **Connection Configuration** <a href="#tavily-connection-configuration" id="tavily-connection-configuration"></a>

<figure><img src="https://4279757243-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FTbBxR0Ob7RUmbvHZkQi2%2Fuploads%2FIBJPveHH0EuI33mlFvNs%2Ftavily-conn.png?alt=media&#x26;token=442019a5-3d6e-4c4d-bff0-f9cac53077b3" alt="" width="375"><figcaption><p>Tavily connection</p></figcaption></figure>

* **Type**: Tavily
* **Name**: Customizable name for identifying this connection.
* **API key**: Your API key

{% hint style="info" %}
To get an API key, please follow the official [Tavily documentation](https://docs.tavily.com/docs/welcome#getting-started).
{% endhint %}

### **Usage Example** <a href="#tavily-usage-example" id="tavily-usage-example"></a>

<figure><img src="https://4279757243-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FTbBxR0Ob7RUmbvHZkQi2%2Fuploads%2FNPLZHZxjkl5uuoNIe9FJ%2Ftavily-flow.png?alt=media&#x26;token=96e47927-9399-4975-ba17-14d45e0c9e70" alt=""><figcaption><p>Tavily workflow</p></figcaption></figure>

1. Add an **Input** node and specify a query.
2. Drag a **Tavily** node into the workspace and connect it to the **Input** node. Set the desired configuration.
3. Attach a downstream node (e.g. **Output**) to handle the search content.

## Jina (search)

<figure><img src="https://4279757243-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FTbBxR0Ob7RUmbvHZkQi2%2Fuploads%2FMsHKWzbcINZFiIPLESZJ%2Fimage.png?alt=media&#x26;token=6bd0c4d0-a906-4158-bbd1-2c9ac31e8bdd" alt=""><figcaption></figcaption></figure>

The **Jina Searcher** tool node allows users to perform web searches using the **Jina AI API**, with support for semantic and neural search.

It accepts a query and optional parameters, returning relevant results based on content similarity. Ideal for workflows needing smarter search beyond keyword matching.

### **Configuration** <a href="#scaleserp-configuration" id="scaleserp-configuration"></a>

* **Name**: Customizable name for identifying this node.
* **Connection**: The connection configuration for Jina.
* **Description (for Agents)**: Short description of the node’s functionality.
* **Query:** A `string` containing the search query. This can also be dynamically provided in the **Input**.
* **Max results:** Maximum number of results to return.
* **Include images:** A Boolean value indicating whether image results should be included.
* **Include full content:** A Boolean value indicating whether to return the full content or just a summary/snippet.

### **Input** <a href="#scaleserp-input" id="scaleserp-input"></a>

* **query**: A `string` containing the search query. If it's missing, the query from the configuration will be used.

### **Output** <a href="#scaleserp-output" id="scaleserp-output"></a>

* **Content**:&#x20;
  * **Standard -** `dict` output with the following fields:
    * **result**: Formatted string output of the search results, including title, URL, and content description.
    * **sources\_with\_url**: List of sources in \[Title]\(URL) format for quick access to original pages.
    * **images:** A dictionary with the name and URL of each image.
    * **raw\_response**: Full raw JSON response from Jina AI, useful for advanced data analysis and processing.

      Example:

      ```json
      {
          "result": "Title: Tech News Today\nLink: https://example.com/article1\nSnippet: Highlights of today's tech news...\n\nTitle: Innovation Daily\nLink: https://example.com/article2\nSnippet: Key developments in the tech industry...\n\nTitle: Future Insights\nLink: https://example.com/article3\nSnippet: Upcoming tech trends...",
          "sources_with_url": [
              "[Tech News Today](https://example.com/article1)",
              "[Innovation Daily](https://example.com/article2)",
              "[Future Insights](https://example.com/article3)"
          ],
          "images": {},
          "raw_response": {...}
      }
      ```
  * **Optimized for Agents -** formatted `string`

    Example:

    ```xml
    <Sources with URLs>
    [Tech News Today](https://example.com/article1)
    [Innovation Daily](https://example.com/article2)
    <\Sources with URLs>

    <Search results for query tech news>
    Title: Tech News Today
    Link: https://example.com/article1
    Description: Highlights of today's tech news...

    Title: Innovation Daily
    Link: https://example.com/article2
    Description: Key developments in the tech industry...
    <\Search results>
    ```
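As a rough illustration of what the node wraps, a direct request against Jina's public search interface might look like the sketch below. The `s.jina.ai` endpoint, path-style query, and `Accept: application/json` header are assumptions based on Jina's public documentation, not the node's actual implementation.

```python
import json
import urllib.parse
import urllib.request

def jina_search_url(query: str) -> str:
    # s.jina.ai takes the query in the URL path (an assumption based on
    # Jina's public search interface), so it must be percent-encoded.
    return "https://s.jina.ai/" + urllib.parse.quote(query)

def jina_search(api_key: str, query: str) -> dict:
    """Run a semantic search and return the raw JSON response."""
    req = urllib.request.Request(
        jina_search_url(query),
        headers={"Authorization": "Bearer " + api_key,
                 "Accept": "application/json"})
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.load(resp)
```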

### **Connection Configuration** <a href="#scaleserp-connection-configuration" id="scaleserp-connection-configuration"></a>

<figure><img src="https://4279757243-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FTbBxR0Ob7RUmbvHZkQi2%2Fuploads%2FNq7ICqouRQJAi1mFD6UR%2Fimage.png?alt=media&#x26;token=8e3c5971-3977-492f-ab00-3aacd6e43b73" alt=""><figcaption></figcaption></figure>

* **Type**: Jina
* **Name**: Customizable name for identifying this connection.
* **API key**: Your API key

## ScaleSerp (search)

<figure><img src="https://4279757243-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FTbBxR0Ob7RUmbvHZkQi2%2Fuploads%2FsMZmffmFYPfAmxb01tjK%2Fscaleserp.png?alt=media&#x26;token=a333d649-dab6-4b18-b76b-cd89b3d6ce6b" alt="" width="563"><figcaption><p>ScaleSerp search tool node</p></figcaption></figure>

The **ScaleSerp** tool node enables users to perform web searches using the Scale SERP API. It is designed to retrieve search results across various search types, such as organic web search, news, images, and videos. This tool is ideal for workflows that need dynamic web search capabilities and flexible data retrieval.

### **Configuration** <a href="#scaleserp-configuration" id="scaleserp-configuration"></a>

* **Name**: Customizable name for identifying this node.
* **Connection**: The connection configuration for ScaleSerp.
* **Description**: Short description of the node’s functionality.
* **Optimized for Agents**: When enabled, formats the output specifically for AI agents, structuring the results in a format that agents can easily parse, interpret, and display.

### **Input** <a href="#scaleserp-input" id="scaleserp-input"></a>

* **query**: A `string` containing the search query to execute.

### **Output** <a href="#scaleserp-output" id="scaleserp-output"></a>

* **Content**:&#x20;
  * **Standard** (if **Optimized for Agents** is disabled) - `dict` output with the following fields:
    * **result**: Formatted string output of the search results, including title, URL, and content snippets.
    * **sources\_with\_url**: List of sources in \[Title: (URL)] format for quick access to original pages.
    * **urls**: List of URLs of the search results.
    * **raw\_response**: Full raw JSON response from Scale SERP, useful for advanced data analysis and processing.

      Example:

      ```json
      {
          "result": "Title: Tech News Today\nLink: https://example.com/article1\nSnippet: Highlights of today's tech news...\n\nTitle: Innovation Daily\nLink: https://example.com/article2\nSnippet: Key developments in the tech industry...\n\nTitle: Future Insights\nLink: https://example.com/article3\nSnippet: Upcoming tech trends...",
          "sources_with_url": [
              "Tech News Today: (https://example.com/article1)",
              "Innovation Daily: (https://example.com/article2)",
              "Future Insights: (https://example.com/article3)"
          ],
          "urls": [
              "https://example.com/article1",
              "https://example.com/article2",
              "https://example.com/article3"
          ],
          "raw_response": {...}
      }
      ```
  * **Optimized for Agents -** formatted `string`

    Example:

    ```xml
    <Sources with URLs>
    [Tech News Today: (https://example.com/article1)]
    [Innovation Daily: (https://example.com/article2)]
    <\Sources with URLs>

    <Search results>
    Title: Tech News Today
    Link: https://example.com/article1
    Snippet: Highlights of today's tech news...

    Title: Innovation Daily
    Link: https://example.com/article2
    Snippet: Key developments in the tech industry...
    <\Search results>
    ```
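For reference, a direct Scale SERP request can be sketched as follows. The `api_key`, `q`, and `search_type` parameter names follow the public Scale SERP API; treat this as an illustration of what the node wraps rather than its implementation.

```python
import json
import urllib.parse
import urllib.request

def scaleserp_url(api_key: str, query: str, search_type: str = "") -> str:
    # Scale SERP takes the key and query as URL parameters; `search_type`
    # selects news/images/videos (leave empty for organic web search).
    params = {"api_key": api_key, "q": query}
    if search_type:
        params["search_type"] = search_type
    return "https://api.scaleserp.com/search?" + urllib.parse.urlencode(params)

def scaleserp_search(api_key: str, query: str) -> dict:
    """Run an organic web search and return the raw JSON response."""
    with urllib.request.urlopen(scaleserp_url(api_key, query),
                                timeout=30) as resp:
        return json.load(resp)
```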

### **Connection Configuration** <a href="#scaleserp-connection-configuration" id="scaleserp-connection-configuration"></a>

<figure><img src="https://4279757243-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FTbBxR0Ob7RUmbvHZkQi2%2Fuploads%2FVhhfgQYbuwLhpYAYMqaO%2Fscaleserp-conn.png?alt=media&#x26;token=93836f4a-6daa-47af-9984-4a463646c4fc" alt="" width="375"><figcaption><p>ScaleSerp connection</p></figcaption></figure>

* **Type**: ScaleSerp
* **Name**: Customizable name for identifying this connection.
* **API key**: Your API key

{% hint style="info" %}
To get an API key, please follow the official [ScaleSerp documentation](https://docs.trajectdata.com/scaleserp).
{% endhint %}

### **Usage Example** <a href="#scaleserp-usage-example" id="scaleserp-usage-example"></a>

<figure><img src="https://4279757243-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FTbBxR0Ob7RUmbvHZkQi2%2Fuploads%2FELfiq9PoOxpUphwfHb2r%2Fscaleserp-flow.png?alt=media&#x26;token=e08c9a8c-71fb-48d6-9500-150c75fdc183" alt=""><figcaption><p>ScaleSerp workflow</p></figcaption></figure>

1. Add an **Input** node and specify a query in the input field.
2. Drag a **ScaleSerp** node into the workspace and connect it to the **Input** node. Set the desired configuration.
3. Attach a downstream node (e.g. **Output**) to handle the search content.

## ZenRows (scraping)

<figure><img src="https://4279757243-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FTbBxR0Ob7RUmbvHZkQi2%2Fuploads%2Fy7UkhQw3fFSxXl48wIzC%2Fzenrows.png?alt=media&#x26;token=27099a24-b9b6-4b22-9eac-3cbbf0bc4e41" alt="" width="563"><figcaption><p>ZenRows scraping tool node</p></figcaption></figure>

The **ZenRows** tool node provides a powerful web scraping solution, allowing users to extract content from web pages using the ZenRows service. It is ideal for integrating scraped content directly into workflows for further processing or analysis.

### **Configuration** <a href="#zenrows-configuration" id="zenrows-configuration"></a>

* **Name**: Customizable name for identifying this node.
* **Connection**: The connection configuration for ZenRows.
* **Description**: Short description of the node’s functionality.
* **URL**: The URL of the page to scrape. This can also be dynamically provided in the **Input**.
* **Optimized for Agents**: When enabled, formats the output specifically for AI agents, structuring the results in a format that agents can easily parse, interpret, and display.

### **Input** <a href="#zenrows-input" id="zenrows-input"></a>

* **URL**: A `string` containing the URL of the page to scrape. If it's missing, the URL from the configuration will be used.

### **Output** <a href="#zenrows-output" id="zenrows-output"></a>

* **Content**:&#x20;
  * **Standard** (if **Optimized for Agents** is disabled) - `dict` output with the following fields:

    * **url**: The URL of the scraped page.
    * **content**: The main content of the page in Markdown format.

    Example:

    ```json
    {
        "url": "https://example.com/article",
        "content": "# Article Title\n\nThis is the introductory paragraph of the article...\n\n## Key Points\n\n- Point 1: Description of the first point.\n- Point 2: Description of the second point.\n\nConclusion with some final remarks."
    }
    ```
  * **Optimized for Agents -** formatted `string`

    Example:

    ```xml
    <Source URL>
    https://example.com/article
    <\Source URL>

    <Scraped result>
    # Article Title

    This is the introductory paragraph of the article...

    ## Key Points

    - Point 1: Description of the first point.
    - Point 2: Description of the second point.

    Conclusion with some final remarks.
    <\Scraped result>
    ```
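Under the hood, ZenRows proxies the target page through its scraping API. A minimal sketch of such a request is shown below; the `apikey` and `url` parameter names follow ZenRows' public API, and the helper names are illustrative.

```python
import urllib.parse
import urllib.request

def zenrows_url(api_key: str, target_url: str) -> str:
    # The target URL is passed as a percent-encoded query parameter.
    return "https://api.zenrows.com/v1/?" + urllib.parse.urlencode(
        {"apikey": api_key, "url": target_url})

def zenrows_scrape(api_key: str, target_url: str, timeout: int = 60) -> str:
    """Fetch the page through ZenRows and return the response body as text."""
    with urllib.request.urlopen(zenrows_url(api_key, target_url),
                                timeout=timeout) as resp:
        return resp.read().decode("utf-8")
```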

### **Connection Configuration** <a href="#zenrows-connection-configuration" id="zenrows-connection-configuration"></a>

<figure><img src="https://4279757243-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FTbBxR0Ob7RUmbvHZkQi2%2Fuploads%2FUkzGMhJCjPCxfBjYrHj8%2Fzenrows-conn.png?alt=media&#x26;token=5254a829-7d55-4e13-9b2e-60610ff10d8d" alt="" width="375"><figcaption><p>ZenRows connection</p></figcaption></figure>

* **Type**: ZenRows
* **Name**: Customizable name for identifying this connection.
* **API key**: Your API key

{% hint style="info" %}
To get an API key, please follow the official [ZenRows documentation](https://docs.zenrows.com/scraper-api/scraper-api-setup#getting-started).
{% endhint %}

### **Usage Example** <a href="#zenrows-usage-example" id="zenrows-usage-example"></a>

<figure><img src="https://4279757243-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FTbBxR0Ob7RUmbvHZkQi2%2Fuploads%2FSzwurgo9weDYknZHedgd%2Fzenrows-flow.png?alt=media&#x26;token=8d488a47-d7b1-4267-abe4-6f2974f7932e" alt=""><figcaption><p>Workflow with url specified in Input node</p></figcaption></figure>

* Workflow with the URL specified in the **Input** node
  1. Add an **Input** node and specify a URL.
  2. Drag a **ZenRows** node into the workspace and connect it to the **Input** node. Set the desired configuration.
  3. Attach a downstream node (e.g. **Output**) to handle the scraped content.

<figure><img src="https://4279757243-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FTbBxR0Ob7RUmbvHZkQi2%2Fuploads%2FODFCXvB8QVeCwLSQ9nQ7%2Fzenrows-flow-2.png?alt=media&#x26;token=2d5821a0-8535-46be-9314-7251480402a4" alt=""><figcaption><p>Workflow with url specified in ZenRows node</p></figcaption></figure>

* Workflow with the URL specified in the **ZenRows** node
  1. Add an **Input** node.
  2. Drag a **ZenRows** node into the workspace and connect it to the **Input** node. Set the desired configuration and **URL**.
  3. Attach a downstream node (e.g. **Output**) to handle the scraped content.

## Jina (scraping)

<figure><img src="https://4279757243-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FTbBxR0Ob7RUmbvHZkQi2%2Fuploads%2FR9yH412HCbBVWX0ih22C%2Fimage.png?alt=media&#x26;token=0cb5e469-19ac-4d2b-961d-5c297832bd3a" alt=""><figcaption></figcaption></figure>

The **Jina Scraper** tool node enables users to extract content from web pages using the **Jina AI API**, supporting both structured and unstructured data extraction.\
It is ideal for workflows that require automated ingestion of web content for downstream processing such as summarization, embedding, or indexing.

### **Configuration** <a href="#scaleserp-configuration" id="scaleserp-configuration"></a>

* **Name**: Customizable name for identifying this node.
* **Connection**: The connection configuration for Jina.
* **Description (for Agents)**: Short description of the node’s functionality.
* **URL:** A `string` containing the URL to scrape. This can also be dynamically provided in the **Input**.
* **Response format:**
  * **Default**: The default pipeline optimized for most websites and suitable for LLM input. It applies readability filtering to extract clean, relevant content.
  * **Markdown**: Extracts and returns the page content in Markdown format, bypassing readability filtering for raw HTML-to-Markdown conversion.
  * **HTML**: Returns the full raw HTML content of the page using `document.documentElement.outerHTML`.
  * **Text:** Returns plain text extracted from the page using `document.body.innerText`, ideal for simple text-based processing.
  * **Screenshot:** Captures and returns the image URL of the visible portion of the page (i.e., the first screen).
  * **Pageshot:** Captures and returns the image URL of a full-page screenshot (best-effort rendering of the entire scrollable content).
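The response formats above map naturally onto Jina's public Reader interface (`r.jina.ai`), which prefixes the target URL and selects the output via an `X-Return-Format` header. The sketch below illustrates that mapping; the endpoint, header name, and header values are assumptions based on Jina's public documentation, not the node's internals.

```python
import urllib.request

# Assumed mapping from the node's response formats to Reader header values;
# "Default" sends no header, letting the service apply readability filtering.
RETURN_FORMATS = {"Markdown": "markdown", "HTML": "html", "Text": "text",
                  "Screenshot": "screenshot", "Pageshot": "pageshot"}

def jina_reader_headers(api_key: str, response_format: str = "Default") -> dict:
    """Build the request headers for a given response format."""
    headers = {"Authorization": "Bearer " + api_key}
    if response_format in RETURN_FORMATS:
        headers["X-Return-Format"] = RETURN_FORMATS[response_format]
    return headers

def jina_scrape(api_key: str, url: str, response_format: str = "Default") -> str:
    # The Reader endpoint takes the target URL as a path suffix.
    req = urllib.request.Request(
        "https://r.jina.ai/" + url,
        headers=jina_reader_headers(api_key, response_format))
    with urllib.request.urlopen(req, timeout=60) as resp:
        return resp.read().decode("utf-8")
```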

### **Input** <a href="#scaleserp-input" id="scaleserp-input"></a>

* **URL**: A `string` containing the URL of the page to scrape. If it's missing, the URL from the configuration will be used.

### **Output** <a href="#scaleserp-output" id="scaleserp-output"></a>

* **Content**:&#x20;
  * **Standard -** `dict` output with the following fields:

    * **url**: The URL of the scraped page.
    * **content**: The main content of the page, in the format chosen in the configuration.

    Example:

    ```json
    {
        "url": "https://example.com/article",
        "content": "# Article Title\n\nThis is the introductory paragraph of the article...\n\n## Key Points\n\n- Point 1: Description of the first point.\n- Point 2: Description of the second point.\n\nConclusion with some final remarks."
    }
    ```
  * **Optimized for Agents -** formatted `string`

    Example:

    ```xml
    <Source URL>
    https://example.com/article
    <\Source URL>

    <Scraped result>
    # Article Title

    This is the introductory paragraph of the article...

    ## Key Points

    - Point 1: Description of the first point.
    - Point 2: Description of the second point.

    Conclusion with some final remarks.
    <\Scraped result>
    ```

### **Connection Configuration** <a href="#scaleserp-connection-configuration" id="scaleserp-connection-configuration"></a>

<figure><img src="https://4279757243-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FTbBxR0Ob7RUmbvHZkQi2%2Fuploads%2FNq7ICqouRQJAi1mFD6UR%2Fimage.png?alt=media&#x26;token=8e3c5971-3977-492f-ab00-3aacd6e43b73" alt=""><figcaption></figcaption></figure>

* **Type**: Jina
* **Name**: Customizable name for identifying this connection.
* **API key**: Your API key

## FireCrawl (scraping)

<figure><img src="https://4279757243-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FTbBxR0Ob7RUmbvHZkQi2%2Fuploads%2F9xn7TkT0cNWE77cMvpJn%2Ffirecrawl.png?alt=media&#x26;token=be980b9b-bc06-4c70-89ce-f0b4556ac6a6" alt="" width="563"><figcaption><p>FireCrawl scraping tool node</p></figcaption></figure>

The **FireCrawl** tool node is a web scraping tool powered by FireCrawl. It enables users to extract structured content from web pages. This tool is ideal for workflows requiring flexible web data extraction capabilities.

### **Configuration** <a href="#firecrawl-configuration" id="firecrawl-configuration"></a>

* **Name**: Customizable name for identifying this node.
* **Connection**: The connection configuration for FireCrawl.
* **Description**: Short description of the node’s functionality.
* **URL**: The URL of the page to scrape. This can also be dynamically provided in the **Input**.
* **Optimized for Agents**: When enabled, formats the output specifically for AI agents, structuring the results in a format that agents can easily parse, interpret, and display.

### **Input** <a href="#firecrawl-input" id="firecrawl-input"></a>

* **URL**: A `string` containing the URL of the page to scrape. If it's missing, the URL from the configuration will be used.

### **Output** <a href="#firecrawl-output" id="firecrawl-output"></a>

* **Content**:&#x20;
  * **Standard** (if **Optimized for Agents** is disabled) - `dict` output with the following fields:
    * **success**: Boolean indicating whether the scraping was successful.
    * **url**: The URL of the scraped page.
    * **markdown**: Content in Markdown format (if applicable).
    * **content**: Main text content of the page.
    * **html**: Full HTML content if Include HTML was enabled.
    * **raw\_html**: Raw HTML if extracted without additional processing.
    * **metadata**: Metadata information extracted from the page.
    * **llm\_extraction**: Extracted content optimized for language model processing.
    * **warning**: Any warnings or messages returned during the scraping process.

      Example:

      ```json
      {
          "success": true,
          "url": "https://example.com/news",
          "content": "Full article content in plain text",
          "llm_extraction": {
              "summary": "This news article covers recent developments in tech.",
              "key_points": [
                  "AI advancements in 2023.",
                  "Growth in renewable energy adoption.",
                  "Cybersecurity challenges in modern industries."
              ]
          },
          "html": "<html><body>...</body></html>",
          "markdown": "",
          "metadata": {},
          "warning": null
      }
      ```
  * **Optimized for Agents -** formatted `string`

    Example:

    ```xml
    <Source URL>
    https://example.com/news
    <\Source URL>

    <LLM Extraction>
    Summary: This news article covers recent developments in tech.
    Key Points:
    - AI advancements in 2023.
    - Growth in renewable energy adoption.
    - Cybersecurity challenges in modern industries.
    <\LLM Extraction>
    ```
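A direct call against FireCrawl's public scrape endpoint can be sketched as follows. The `/v1/scrape` URL, bearer auth, and JSON body shape follow FireCrawl's public API; `firecrawl_payload` is a hypothetical helper, and this is an illustration rather than the node's implementation.

```python
import json
import urllib.request

def firecrawl_payload(url: str, formats=("markdown",)) -> dict:
    # The scrape endpoint takes the target URL and the requested output
    # formats in a JSON body (per FireCrawl's public v1 API).
    return {"url": url, "formats": list(formats)}

def firecrawl_scrape(api_key: str, url: str) -> dict:
    """POST a scrape request and return the raw JSON response."""
    body = json.dumps(firecrawl_payload(url)).encode("utf-8")
    req = urllib.request.Request(
        "https://api.firecrawl.dev/v1/scrape", data=body,
        headers={"Authorization": "Bearer " + api_key,
                 "Content-Type": "application/json"})
    with urllib.request.urlopen(req, timeout=60) as resp:
        return json.load(resp)
```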

### **Connection Configuration** <a href="#firecrawl-connection-configuration" id="firecrawl-connection-configuration"></a>

<figure><img src="https://4279757243-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FTbBxR0Ob7RUmbvHZkQi2%2Fuploads%2FsssDUdUOi0yvG6jjbnsL%2Ffirecrawl-conn.png?alt=media&#x26;token=fe64bd8d-da24-4d26-b8b1-cd7770365c77" alt="" width="375"><figcaption><p>FireCrawl connection</p></figcaption></figure>

* **Type**: FireCrawl
* **Name**: Customizable name for identifying this connection.
* **API key**: Your API key.&#x20;

{% hint style="info" %}
To get an API key, please follow the official [FireCrawl documentation](https://docs.firecrawl.dev/introduction#api-key).
{% endhint %}

### **Usage Example** <a href="#firecrawl-usage-example" id="firecrawl-usage-example"></a>

<figure><img src="https://4279757243-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FTbBxR0Ob7RUmbvHZkQi2%2Fuploads%2F71EEUAj03dJfOJjAGgHn%2Ffirecrawl-flow-2.png?alt=media&#x26;token=9d8e34f0-e09b-4fa3-bac0-c28c37b6ea6c" alt=""><figcaption><p>Workflow with url specified in Input node</p></figcaption></figure>

* Workflow with the URL specified in the **Input** node:
  1. Add an **Input** node and specify a URL.
  2. Drag a **FireCrawl** node into the workspace and connect it to the **Input** node. Set the desired configuration.
  3. Attach a downstream node (e.g. **Output**) to handle the scraped content.

<figure><img src="https://4279757243-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FTbBxR0Ob7RUmbvHZkQi2%2Fuploads%2FtICk8YxaiO0PyEsqb4D9%2Ffirecrawl-flow.png?alt=media&#x26;token=5b9facf9-027d-4c1e-888d-4179b7d4d0ee" alt=""><figcaption><p>Workflow with url specified in FireCrawl node</p></figcaption></figure>

* Workflow with the URL specified in the **FireCrawl** node:
  1. Add an **Input** node.
  2. Drag a **FireCrawl** node into the workspace and connect it to the **Input** node. Set the desired configuration and **URL**.
  3. Attach a downstream node (e.g. **Output**) to handle the scraped content.

## HTTP API Call

<figure><img src="https://4279757243-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FTbBxR0Ob7RUmbvHZkQi2%2Fuploads%2FgxRZtNBvKY5RqG0X9Ilk%2Fimage.png?alt=media&#x26;token=927e2375-2519-4ed6-b1af-785f1723b1cb" alt=""><figcaption><p>HTTP API CALL node</p></figcaption></figure>

The **HTTP API Call** node is a versatile tool for making HTTP API requests. This node allows for dynamic configuration of request parameters, headers, and data payload, making it suitable for interacting with various APIs. The node is flexible and can handle responses in JSON, text, or raw data format.

### **Configuration** <a href="#http-api-call-configuration" id="http-api-call-configuration"></a>

* **Name**: Customizable name for identifying this node.
* **Connection**: Connection to the HTTP service, specifying the base URL and HTTP method (GET, POST, etc.).
* **Description**: Short description of the node’s functionality.
* **Response Type**: Specifies the expected response format, with options:
  * **json**: Parses the response as JSON and returns it as a `dict`.
  * **text**: Returns the response as a plain text `string`.
  * **raw**: Returns the raw binary response (`bytes`), useful for non-text responses (e.g., images).
* **Timeout**: Sets the maximum time in seconds the node will wait for a response, with a default of 30 seconds.
* **URL:** Endpoint URL for the request.
* **Payload Type:** Specifies how the request body is encoded, with options:
  * **json:** A dictionary object sent as the JSON body of the request.
  * **raw:** A JSON-serializable object sent as the raw request body.
* **Headers**: Dictionary of HTTP headers to include in the request, such as `Content-Type`.
* **Data**: Dictionary containing the data to send as the request body, applicable mainly for POST, PUT, or PATCH requests.
* **Params**: A dictionary of query parameters to append to the URL.

### **Input** <a href="#http-api-call-input" id="http-api-call-input"></a>

* **url:** Endpoint URL for the new request.
* **data**: Dictionary containing the data to send as the request body, applicable mainly for POST, PUT, or PATCH requests.
* **Payload Type:** Specifies how the request body is encoded, with options:
  * **json:** A dictionary object sent as the JSON body of the request.
  * **raw:** A JSON-serializable object sent as the raw request body.
* **headers**: Dictionary containing additional headers to include in the request.
* **params**: Dictionary of query parameters to append to the URL.

### **Output** <a href="#http-api-call-output" id="http-api-call-output"></a>

* **content**: The main content of the response, returned in the format specified by the Response Type configuration
  * json: Returns the response as a `dict`.
  * text: Returns the response as a `string`.
  * raw: Returns the response as raw binary data (`bytes`).
* **status\_code**: The HTTP status code returned by the API, indicating the success or failure of the request.

Example:

```json
{
    "content": {
        "data": {
            "id": 1,
            "name": "Sample Data"
        }
    },
    "status_code": 200
}
```
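The node's behavior can be approximated in plain Python. The sketch below is illustrative only (`build_url` and `http_api_call` are hypothetical names, not the node's implementation): it appends query parameters to the URL, sends an optional JSON body, decodes the response per the configured response type, and returns the same `content`/`status_code` shape shown above.

```python
import json
import urllib.error
import urllib.parse
import urllib.request

def build_url(url: str, params: dict = None) -> str:
    """Append query parameters, preserving any already present in the URL."""
    if not params:
        return url
    sep = "&" if "?" in url else "?"
    return url + sep + urllib.parse.urlencode(params)

def http_api_call(url, method="GET", headers=None, params=None, data=None,
                  response_type="json", timeout=30):
    body = json.dumps(data).encode("utf-8") if data is not None else None
    req = urllib.request.Request(build_url(url, params), data=body,
                                 headers=headers or {}, method=method)
    try:
        resp = urllib.request.urlopen(req, timeout=timeout)
    except urllib.error.HTTPError as err:
        resp = err  # non-2xx responses still carry a body and a status code
    raw = resp.read()
    if response_type == "json":
        content = json.loads(raw)
    elif response_type == "text":
        content = raw.decode("utf-8")
    else:  # "raw"
        content = raw
    return {"content": content, "status_code": resp.getcode()}
```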

### **Connection Configuration** <a href="#http-api-call-connection-configuration" id="http-api-call-connection-configuration"></a>

<figure><img src="https://4279757243-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FTbBxR0Ob7RUmbvHZkQi2%2Fuploads%2FniblpSTxl9SWJcWDcV2q%2Fhttp-conn.png?alt=media&#x26;token=c0b05729-2f99-418a-80b8-318808eed5a1" alt="" width="373"><figcaption><p>Http connection</p></figcaption></figure>

* **Type**: Http
* **Name**: Customizable name for identifying this connection.
* **URL**: API URL (e.g. `https://api.openai.com/v1/` for OpenAI).
* **Method:** HTTP method (GET, POST, etc.).
* **Headers**: Dictionary containing additional headers to include in the request.
* **Params**: Dictionary of query parameters to append to the URL.
* **Data**: Dictionary containing the data to send as the request body, applicable mainly for POST, PUT, or PATCH requests.

### Priority of data, headers, and params <a href="#http-api-call-priority" id="http-api-call-priority"></a>

The **HTTP API Call** node allows flexibility in defining data, headers, and params from multiple sources, applied in a specific order of priority:

1. **Input**: The highest priority, supplied dynamically during execution (e.g., from another node in the workflow). If data, headers, or params are specified here, they override any values set in the node configuration or connection.
2. **Node Configuration**: Set directly in the node’s settings during workflow design. If no values are provided in the input, the node uses values defined directly in the node configuration.
3. **Connection**: Defined within the HTTP connection. Default values set in the connection (like base URL or default headers) are used only if neither the input data nor the node configuration specifies them. This is particularly useful for setting default headers, authentication tokens, or base parameters that apply to all requests made using the connection.

{% hint style="info" %}
When merging values from multiple sources, the node combines dictionaries for headers and params across all levels. In case of conflicts, the priority order (**Input > Node Configuration > Connection**) determines which value to use.
{% endhint %}
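This merge can be sketched in a few lines of Python, where later sources overwrite earlier ones key by key (`merge_request_part` is a hypothetical helper name, not the node's actual implementation):

```python
def merge_request_part(connection: dict, node_config: dict,
                       node_input: dict) -> dict:
    """Combine one request part (headers or params) from all three sources.
    Later sources win: Input > Node Configuration > Connection."""
    merged = {}
    for source in (connection, node_config, node_input):
        merged.update(source or {})
    return merged

# Applied once for headers: the runtime token overrides the default one,
# while keys unique to each source are kept.
headers = merge_request_part(
    {"Authorization": "Bearer default_token"},   # Connection
    {"Content-Type": "application/json"},        # Node Configuration
    {"Authorization": "Bearer runtime_token"},   # Input
)
# headers == {"Authorization": "Bearer runtime_token",
#             "Content-Type": "application/json"}
```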

#### Example of Priority Usage

Consider an HTTP API Request node configured as follows:

1. **Connection**:

Headers:

```json
{
    "Authorization": "Bearer default_token"
}
```

Params:

```json
{
    "api_version": "v1"
}
```

2. **Node Configuration**:

Headers:

```json
{
    "Content-Type": "application/json"
}
```

Params:

```json
{
    "locale": "en-US"
}
```

3. **Input (provided at runtime)**:

Headers:

```json
{
    "Authorization": "Bearer runtime_token"
}
```

Params:

```json
{
    "api_version": "v1",
    "debug": "true"
}
```

Given this setup, the node merges the values as follows:

Headers:

```json
{
    "Authorization": "Bearer runtime_token",
    "Content-Type": "application/json"
}
```

Params:

```json
{
    "locale": "en-US",
    "api_version": "v1",
    "debug": "true"
}
```

### **Usage Example** <a href="#http-api-call-usage-example" id="http-api-call-usage-example"></a>

<figure><img src="https://4279757243-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FTbBxR0Ob7RUmbvHZkQi2%2Fuploads%2FbNw8lM9zaltO1pfVBr4e%2Fhttp-flow.png?alt=media&#x26;token=062b333e-271c-464d-9adc-51ba0e72c37a" alt="" width="563"><figcaption><p>HTTP API Call node workflow</p></figcaption></figure>

1. Add an **Input** node and specify **data**, **headers**, and query **parameters** to provide for the request dynamically.
2. Drag an **HTTP API Call** node into the workspace and connect it to the **Input** node. Set the desired configuration.
3. Attach a downstream node (e.g. **Output**) to handle the API Call **content** and **status\_code**.
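For reference, the request the node ultimately performs is roughly equivalent to the following standard-library sketch; the URL, token, and payload are placeholders, not values from the platform:

```python
import json
import urllib.parse
import urllib.request

# Placeholder values; in the workflow these come from the Input node,
# the node configuration, and the connection (in ascending priority).
base_url = "https://api.example.com/v1/items"
headers = {"Authorization": "Bearer runtime_token", "Content-Type": "application/json"}
params = {"debug": "true"}
data = {"name": "example"}

# Query params are appended to the URL; data is sent as the request body.
url = f"{base_url}?{urllib.parse.urlencode(params)}"
request = urllib.request.Request(
    url,
    data=json.dumps(data).encode("utf-8"),
    headers=headers,
    method="POST",
)
# urllib.request.urlopen(request) would return a response whose body and
# status map to the node's **content** and **status_code** outputs.
```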

## E2B Interpreter

<figure><img src="https://4279757243-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FTbBxR0Ob7RUmbvHZkQi2%2Fuploads%2FaFgspzE1aanMFehY0XUx%2Fe2b.png?alt=media&#x26;token=e7f500c2-5452-4be3-8853-fb1557497f3b" alt="" width="563"><figcaption><p>E2B Interpreter node</p></figcaption></figure>

The **E2B Interpreter** node enables users to interact with an E2B sandbox environment for executing Python code, running shell commands, and managing files within a secure, isolated environment. This node is ideal for workflows that require on-demand computation, filesystem operations, and API requests.

{% hint style="danger" %}
Currently, the **E2B Interpreter** node is optimized for use with **Agents**, offering full functionality for dynamic code execution, shell command processing, and file management. While it can be used in workflows, some configurations and advanced features are still under development for seamless integration in workflow settings.
{% endhint %}

### **Configuration** <a href="#e2b-interpreter-configuration" id="e2b-interpreter-configuration"></a>

* **Name**: Customizable name for identifying this node.
* **Connection**: Configuration for connecting to the E2B environment.
* **Description**: Short description of the node’s functionality.
* **Optimized for Agents**: When enabled, formats the output specifically for AI agents, structuring the results in a format that agents can easily parse, interpret, and display.

### **Input** <a href="#e2b-interpreter-input" id="e2b-interpreter-input"></a>

* **input**: The main input field for specifying either a shell command, Python code, or package installations.

### **Output** <a href="#e2b-interpreter-output" id="e2b-interpreter-output"></a>

* Content:&#x20;
  * **Standard (**&#x69;f **Optimized for Agents** disabled) **-** `dict` output with the following fields:

    * **files\_installation**: Details about uploaded files.
    * **packages\_installation**: Installed packages.
    * **shell\_command\_execution**: Shell command output.
    * **code\_execution**: Python code execution result.

    Example:

    ```json
    {
        "files_installation": "File uploaded: example.txt -> /sandbox/example.txt",
        "packages_installation": "Installed packages: requests",
        "shell_command_execution": "total 8\ndrwxr-xr-x 2 root root 4096 Nov 1 10:00 .\ndrwxr-xr-x 1 root root 4096 Nov 1 10:00 ..",
        "code_execution": "{\"key\": \"value\"}"
    }
    ```
  * **Optimized for Agents -** formatted `string`

    Example:

    ```xml
    <Files installation>
    File uploaded: example.txt -> /sandbox/example.txt
    </Files installation>

    <Package installation>
    Installed packages: requests
    </Package installation>

    <Shell command execution>
    total 8
    drwxr-xr-x 2 root root 4096 Nov 1 10:00 .
    drwxr-xr-x 1 root root 4096 Nov 1 10:00 ..
    </Shell command execution>

    <Code execution>
    {"key": "value"}
    </Code execution>
    ```
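For illustration, the agent-optimized string can be derived from the standard `dict` output with a small formatter; `format_for_agents` is a hypothetical helper showing the mapping, not part of the platform:

```python
def format_for_agents(result: dict) -> str:
    """Wrap each populated section of the standard dict output in XML-style tags."""
    sections = {
        "files_installation": "Files installation",
        "packages_installation": "Package installation",
        "shell_command_execution": "Shell command execution",
        "code_execution": "Code execution",
    }
    parts = []
    for key, tag in sections.items():
        if result.get(key):  # skip sections that produced no output
            parts.append(f"<{tag}>\n{result[key]}\n</{tag}>")
    return "\n\n".join(parts)
```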

### **Connection Configuration** <a href="#e2b-interpreter-connection-configuration" id="e2b-interpreter-connection-configuration"></a>

<figure><img src="https://4279757243-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FTbBxR0Ob7RUmbvHZkQi2%2Fuploads%2FmpvqgurrkFQNNSKX2ZAF%2Fe2b-conn.png?alt=media&#x26;token=9ec5e5bb-b7ca-41aa-9b08-f6e5fbd9da63" alt="" width="375"><figcaption><p>E2B Interpreter connection</p></figcaption></figure>

* **Type**: e2b
* **Name**: Customizable name for identifying this connection.
* **API key**: Your API key.&#x20;

{% hint style="info" %}
To get an API key, follow the official [E2B documentation](https://e2b.dev/docs/quickstart).
{% endhint %}

## SQL Executor

<figure><img src="https://4279757243-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FTbBxR0Ob7RUmbvHZkQi2%2Fuploads%2F7u18VtfGkBZC2vqVQwBc%2Fimage.png?alt=media&#x26;token=3f0535ba-3222-40c4-8372-2c0be64b65d8" alt=""><figcaption><p>SQL Executor node</p></figcaption></figure>

The **SQL Executor** node is used for SQL query execution. It supports retrieving data and performing various database operations with MySQL, PostgreSQL, Redshift, and Snowflake. This node allows the query to be set dynamically, either as input data or as part of the node configuration.

### **Configuration** <a href="#sql-executor-configuration" id="sql-executor-configuration"></a>

* **Name**: Customizable name for identifying this node.
* **Connection**: Configuration for connecting to the database.
* **Description**: Short description of the node’s functionality.
* **Query:** The SQL statement to execute.

### **Input** <a href="#sql-executor-input" id="sql-executor-input"></a>

* **query**: The main input field for specifying the SQL statement to execute.

### **Output** <a href="#sql-executor-output" id="sql-executor-output"></a>

* Content:&#x20;
  * **Standard (**&#x69;f **Optimized for Agents** disabled) **-** `dict` output with the content of the response: a list of dictionaries if the query retrieves data (e.g., a SELECT statement), or an empty list upon successful execution of a non-returning statement (e.g., INSERT, UPDATE, DELETE).

    Example without data retrieval:

    ```json
    [] - MySQL, PostgreSQL, Redshift DBs

    [{"number of rows inserted": 2}] - Snowflake DB
    ```

    Example with data retrieval:

    ```json
    [
        {"Name": "Row1Name", "Description": "Row1Description"},
        {"Name": "Row2Name", "Description": "Row2Description"},
        {"Name": "Row3Name", "Description": "Row3Description"},
        {"Name": "Row4Name", "Description": "Row4Description"}
    ]
    ```
  * **Optimized for Agents -** formatted string

    Example with data retrieval (non-empty list):

    ```
    "Row 1
    Name: Row1Name
    Description: Row1Description

    Row 2
    Name: Row2Name
    Description: Row2Description

    Row 3
    Name: Row3Name
    Description: Row3Description

    Row 4
    Name: Row4Name
    Description: Row4Description"
    ```

    Example without data retrieval (empty list):

    ```
    "Query "INSERT INTO test (`Name`, `Description`)
        VALUES
            ('Row1Name', 'Row1Description'),
            ('Row2Name', 'Row2Description');" executed successfully. No results returned."
    ```
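The standard output shape can be mimicked with SQLite for illustration (SQLite is not one of the supported engines, and `execute_query` is a hypothetical helper, not the node's implementation):

```python
import sqlite3


def execute_query(conn: sqlite3.Connection, query: str) -> list:
    """Return rows as a list of dicts for data-retrieving queries, else an empty list."""
    conn.row_factory = sqlite3.Row  # expose rows as name-addressable mappings
    cursor = conn.execute(query)
    rows = cursor.fetchall()        # empty for INSERT/UPDATE/DELETE
    conn.commit()
    return [dict(row) for row in rows]


conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE test (Name TEXT, Description TEXT)")

execute_query(conn, "INSERT INTO test VALUES ('Row1Name', 'Row1Description')")  # -> []
rows = execute_query(conn, "SELECT * FROM test")
# rows == [{"Name": "Row1Name", "Description": "Row1Description"}]
```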

## **Connection Configuration**

This node supports four different types of connections, each with its own specific parameters.

<div align="left"><figure><img src="https://4279757243-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FTbBxR0Ob7RUmbvHZkQi2%2Fuploads%2F4dsRxrLWrsSYxBbqxAne%2Fimage.png?alt=media&#x26;token=37cea20e-4a8f-4edc-a8a0-76c8fba12b8c" alt="" width="375"><figcaption><p>Amazon Redshift connection</p></figcaption></figure> <figure><img src="https://4279757243-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FTbBxR0Ob7RUmbvHZkQi2%2Fuploads%2F3gKrNrpcIPUnIeNzuUaC%2Fimage.png?alt=media&#x26;token=51e23f7e-4063-4622-9c9e-d4960dcae276" alt="" width="375"><figcaption><p>MySQL connection</p></figcaption></figure></div>

### **Amazon Redshift connection**

* **Type**: Amazon Redshift.
* **Name**: Customizable name for identifying this connection.
* **Host:** Database server address.
* **Port:** Connection port number, set to 5439 by default.
* **Database:** Name of the database.
* **Username:** Database user name.
* **Password:** Database access password.

### **MySQL connection**

* **Type**: MySQL.
* **Name**: Customizable name for identifying this connection.
* **Host:** Database server address.
* **Port:** Connection port number, set to 3306 by default.
* **Database:** Name of the database.
* **Username:** Database user name.
* **Password:** Database access password.

<div align="center"><figure><img src="https://4279757243-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FTbBxR0Ob7RUmbvHZkQi2%2Fuploads%2FbGJa8Y4QiPjeuNVcOgOh%2Fimage.png?alt=media&#x26;token=817f526b-4937-4569-b02c-1486a5428f33" alt="" width="359"><figcaption><p>PostgreSQL connection</p></figcaption></figure> <figure><img src="https://4279757243-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FTbBxR0Ob7RUmbvHZkQi2%2Fuploads%2FkReKD5fbGDo8F1QCJQlP%2Fimage.png?alt=media&#x26;token=5df393ae-4952-4958-bce4-cac1720b057e" alt="" width="354"><figcaption><p>Snowflake connection</p></figcaption></figure></div>

### PostgreSQL **connection**

* **Type**: PostgreSQL.
* **Name**: Customizable name for identifying this connection.
* **Host:** Database server address.
* **Port:** Connection port number, set to 5432 by default.
* **Database:** Name of the database.
* **Username:** Database user name.
* **Password:** Database access password.

### Snowflake **connection**

* **Type**: Snowflake.
* **Name**: Customizable name for identifying this connection.
* **User:** Database login name.
* **Password:** User authentication password.
* **Account**: Snowflake account identifier.
* **Warehouse**: Compute warehouse name.
* **Database**: Name of the database.
* **Schema**: Database schema name.
