
External Database

Overview

The External Database Node in UPTIQ Workbench enables developers to connect their workflows to external databases of their choice for persistent storage, data retrieval, and data manipulation. This node offers an alternative to the Tables feature in UPTIQ by providing a generic interface for interacting with a variety of external databases, including MongoDB, SQL, PostgreSQL, Oracle, and BigQuery.

With the External Database Node, developers can:
✅ Perform CRUD operations (Create, Read, Update, Delete) across supported databases.
✅ Integrate existing external data sources into AI agent workflows.
✅ Store and retrieve data from custom databases for more flexibility.

Configurations

  • MongoDB
    • Supported operations: CRUD (Read, Write)
    • Configuration fields: Database URI, Database Name, Collection Name, Filters, Projections, Data (for Write)
  • SQL
    • Supported operations: Query Execution
    • Configuration fields: Database URI, Database Name, Query
  • PostgreSQL
    • Supported operations: Query Execution
    • Configuration fields: Database URI, Database Name, Query
  • Oracle
    • Supported operations: Query Execution
    • Configuration fields: Database URI, User, Password, Query
  • BigQuery
    • Supported operations: Query Execution
    • Configuration fields: Project ID, Client Email, Private Key, Query

MongoDB Configuration Details

  • Database URI: Specify the URI to connect to the MongoDB instance.

  • Database Name: Name of the MongoDB database.

  • Collection Name: Collection to read from or write to.

  • Filters: JSON object specifying which documents to retrieve or modify. Example: { "_id": "123", "status": "active" }

  • Projections: JSON object specifying fields to include or exclude in results. Example: { "_id": 0, "name": 1 }

  • Data (for Write): JSON object or array for insert/update operations. Example: [ { "orderId": "1001", "totalAmount": 250 } ]
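To make the Filters and Projections semantics concrete, here is a minimal sketch in plain Python, with no live MongoDB instance. The documents and the `apply_filter`/`apply_projection` helpers are illustrative stand-ins for MongoDB-style behavior, not part of the node itself.

```python
# Illustrative simulation of MongoDB-style filter and projection
# semantics using plain Python dicts; documents are made up.

def apply_filter(docs, filters):
    """Keep documents whose fields match every key/value in `filters`."""
    return [d for d in docs if all(d.get(k) == v for k, v in filters.items())]

def apply_projection(docs, projection):
    """Include only fields mapped to 1; mapping `_id` to 0 drops it."""
    keys = [k for k, v in projection.items() if v == 1]
    return [{k: d[k] for k in keys if k in d} for d in docs]

docs = [
    {"_id": "1", "orderId": "1001", "totalAmount": 250, "status": "completed"},
    {"_id": "2", "orderId": "1002", "totalAmount": 320, "status": "pending"},
]
matched = apply_filter(docs, {"status": "completed"})
result = apply_projection(matched, {"_id": 0, "orderId": 1, "totalAmount": 1})
print(result)  # [{'orderId': '1001', 'totalAmount': 250}]
```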


SQL, PostgreSQL, and Oracle Configuration Details

  • Database URI: URI to connect to the database.

  • Database Name: Name of the database.

  • Query: SQL query to execute. Example: SELECT * FROM orders WHERE status = 'completed';
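As a self-contained illustration of what executing the Query field amounts to, the sketch below uses Python's built-in sqlite3 module as a stand-in for an external SQL database. The table, sample rows, and the wrapping of results into the node's output shape are assumptions for demonstration.

```python
# Illustration of executing a configured SQL query, with sqlite3
# standing in for an external database; table and data are made up.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (orderId TEXT, totalAmount INTEGER, status TEXT)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [("1001", 250, "completed"), ("1002", 320, "pending")],
)

query = "SELECT * FROM orders WHERE status = 'completed';"
cursor = conn.execute(query)
columns = [col[0] for col in cursor.description]
rows = [dict(zip(columns, row)) for row in cursor.fetchall()]

# Shape the rows like the node's output: {"data": [...]}
output = {"data": rows}
print(output)  # {'data': [{'orderId': '1001', 'totalAmount': 250, 'status': 'completed'}]}
```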


BigQuery Configuration Details

  • Project ID: Google Cloud project ID.

  • Client Email: Service account email used for authentication.

  • Private Key: Service account private key used for authentication.

  • Query: SQL query for BigQuery. Example: SELECT orderId, totalAmount FROM orders WHERE status = 'completed';


Output Format

The output is always returned in the following format:

{
  "data": any[]
}
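Because the data key is always a list, downstream workflow logic can consume the output uniformly. A minimal sketch, with illustrative values:

```python
# The node always returns {"data": any[]}, so downstream logic can
# safely iterate over the "data" list. Values here are illustrative.
output = {"data": [{"orderId": "1001", "totalAmount": 250}]}

count = len(output["data"])
total = sum(item["totalAmount"] for item in output["data"])
print(count, total)  # 1 250
```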

Example Use-Cases

Use-Case 1: Retrieving Completed Orders from MongoDB

A workflow retrieves a list of completed orders from a MongoDB collection for reporting purposes.

Configuration:

  • Operation: Read

  • Database Type: MongoDB

  • Database URI: mongodb://localhost:27017

  • Database Name: myDatabase

  • Collection Name: Orders

  • Filters: { "status": "completed" }

  • Projections: { "_id": 0, "orderId": 1, "totalAmount": 1 }

Output:

{
  "data": [
    { "orderId": "1001", "totalAmount": 250 },
    { "orderId": "1002", "totalAmount": 320 }
  ]
}

🔹 Why use this approach?
✔ Integrates existing order data into AI workflows.
✔ Supports dynamic reporting based on external data sources.


Use-Case 2: Executing a SQL Query for Customer Insights

A workflow queries a PostgreSQL database to extract customer details for marketing purposes.

Configuration:

  • Operation: Read

  • Database Type: PostgreSQL

  • Database URI: postgresql://localhost:5432

  • Database Name: customerDB

  • Query: SELECT name, email FROM customers WHERE status = 'active';

Output:

{
  "data": [
    { "name": "John Doe", "email": "john.doe@example.com" },
    { "name": "Jane Smith", "email": "jane.smith@example.com" }
  ]
}

🔹 Why use this approach?
✔ Supports real-time data retrieval for targeted marketing campaigns.
✔ Connects AI workflows to external customer databases seamlessly.


Use-Case 3: Writing New Transactions to a MongoDB Collection

A workflow writes new transaction records into a MongoDB collection.

Configuration:

  • Operation: Write

  • Database Type: MongoDB

  • Database URI: mongodb://localhost:27017

  • Database Name: financialDB

  • Collection Name: Transactions

  • Data: [ { "transactionId": "TX1003", "amount": 500 } ]

Output:

{
  "data": [
    { "status": "success", "insertedCount": 1 }
  ]
}

🔹 Why use this approach?
✔ Supports flexible data storage in custom databases.
✔ Integrates transaction data into external systems seamlessly.
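The write result shown above can be sketched with an in-memory stand-in for the Transactions collection. The `insert_many` helper and its result shape mirror the example output and are illustrative, not the node's actual implementation.

```python
# Sketch of a Write operation's result shape, simulating a MongoDB-style
# insert against an in-memory list; records are illustrative.
collection = []  # stands in for the Transactions collection

def insert_many(collection, data):
    """Append records and report a MongoDB-like write result."""
    collection.extend(data)
    return {"status": "success", "insertedCount": len(data)}

data = [{"transactionId": "TX1003", "amount": 500}]
result = insert_many(collection, data)

# Shape the result like the node's output: {"data": [...]}
output = {"data": [result]}
print(output)  # {'data': [{'status': 'success', 'insertedCount': 1}]}
```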


Key Takeaways for Developers

✅ Flexible Database Support – Connects to a wide range of external databases, including MongoDB, SQL, PostgreSQL, Oracle, and BigQuery.

✅ CRUD and Query Operations – Supports read, write, update, and delete operations, enabling dynamic data management.

✅ Seamless Integration – Acts as a generic interface, allowing developers to use their preferred databases for persistent storage or AI processing.

✅ Alternative to Tables – Provides an alternative to the UPTIQ Tables feature, offering greater flexibility with external databases.

By leveraging the External Database Node, developers can integrate real-time data from external sources into AI workflows, enhancing decision-making and enabling scalable, data-driven automation. 🚀
