Build Your First Agent

Welcome! In this session, we’ll explore how you can integrate and customize your own AI models within the Uptiq AI Workbench to build powerful, agentic workflows.

Sign up to Developer Edition

Jump into the exciting world of AI with UPTIQ AI Platform and start building powerful agents today! To sign up for the UPTIQ AI Workbench, follow these steps:

  1. Access the Platform: Visit https://console.uptiq.ai.

Sign-up
  2. Choose Your Sign-Up Method: Enter your preferred email address to register for the Developer Edition, or select the Sign Up with Google option for a quicker process.

  3. Email Verification (if applicable): If you signed up using an email address, you will receive an invitation link in your inbox.

  4. Verify Your Account: Open your email and click on the ‘Verify Email’ button to confirm and activate your account.

Once verified, you can log in and start leveraging the UPTIQ AI Workbench to build and deploy AI agents efficiently.

Verify your account.

Introduction

UPTIQ AI Workbench is an advanced platform powered by cutting-edge generative AI technology that enables financial institutions to develop and implement bespoke AI agents. These tailored agents possess the capability to understand and interpret natural language, automate a diverse array of tasks, and extract and deliver valuable insights, thereby addressing and resolving a wide spectrum of use cases effectively.

Designed to be versatile and adaptable, the UPTIQ AI Workbench is not limited to specific domains; however, it excels in offering meticulously crafted solutions particularly well-suited for the intricate needs of the enterprise financial services sector.

By leveraging the Workbench, organizations can optimize operational efficiencies, enhance decision-making processes, and ultimately generate a competitive edge within the market. The platform's ability to seamlessly integrate into existing systems further ensures a smooth transition and maximized productivity.

Consequently, businesses utilizing the UPTIQ AI Workbench are well-equipped to navigate the complexities of the financial landscape with innovative, state-of-the-art technological tools.

Common Features of UPTIQ AI Workbench:

  1. Intuitive Agent Builder: The UPTIQ AI Workbench's Agent Builder allows developers to create intelligent workflows that automate repetitive back-office tasks by combining research, content generation, and software integrations. This creates a unified data foundation and streamlines operations.

  2. Seamless Integration with Existing Systems: UPTIQ's Workbench provides a seamless integration experience, allowing you to effortlessly connect with any desired third-party system within your agent environment. By utilizing a declarative, low-code approach, Workbench simplifies the integration process with SaaS applications, enabling efficient data collection and streamlined processes.

Key Features of UPTIQ AI Workbench:

  1. Verticalized AI Workflows: The platform offers highly specialized AI workflows that efficiently solve complex problems across wealth management, banking, and fintech sectors. This vertical integration ensures that the solutions are tailored to the unique challenges of each sector.

  2. High Accuracy with No Hallucinations: UPTIQ's AI is built to avoid common issues found in large language models, such as generating incorrect or misleading outputs. By combining Retrieval-Augmented Generation (RAG) and large language models (LLMs), the platform ensures trustworthy AI outputs, which is crucial for maintaining compliance and trust in financial services.

  3. Financial Data Gateway Integration: The platform features a proprietary Financial Data Gateway that connects with over 100 software platforms housing enterprise data for financial institutions' clients. This integration delivers secure, compliant, and traceable outputs, ensuring that institutions can confidently automate processes and deliver smarter, data-driven solutions.

Security and Compliance: With a focus on regulatory compliance and data security, the UPTIQ AI Platform ensures traceability of AI outputs and full protection of sensitive financial data. This built-in compliance makes AI reliable for everyone, fully adhering to regulatory standards and practices.

PII Masking

What is PII Masking?

PII (Personally Identifiable Information) Masking is a critical feature in UPTIQ AI Workbench that ensures sensitive user data is protected during interactions with AI agents. It identifies and masks PII data in user queries before the data is passed to any LLM (Large Language Model) or other parts of the workflow. This feature allows developers to comply with data privacy regulations and build secure AI agents.

How does it work?

  1. PII Recognition:

    • The PII Masking feature uses pre-built patterns to recognize common types of PII, such as:

      • Address: Detects physical addresses in the text.

      • Zip Code: Identifies zip codes.

      • Full Name: Recognizes full names.

      • Date and Time: Detects dates and times.

      • Email: Recognizes email addresses.

      • Phone Number: Identifies phone numbers.

      • SSN (Social Security Number): Detects social security numbers.

  2. Custom Patterns:

    • Developers can create custom PII patterns by defining specific rules for recognition. This ensures flexibility to handle domain-specific sensitive information.

  3. Masking PII:

    • Once PII data is identified, the feature masks it in real time to prevent exposure. Masked data is processed safely, ensuring that no sensitive information is passed to LLMs or other components of the workflow.

  4. Integration with Workflows:

    • PII Masking is seamlessly integrated into the workflow execution. It ensures data protection without requiring additional manual intervention.

Key Features

  • Pre-Built Patterns: Quickly enable predefined PII recognition for commonly used data types.

  • Customizable: Define custom patterns for specific business or domain needs.

  • Low-Code Implementation: Activate PII Masking with minimal effort via a user-friendly interface.

  • Compliance: Helps adhere to data protection regulations like GDPR, CCPA, and HIPAA.

Where to use PII Masking?

How to use PII Masking?

  1. Enable Pre-Built Patterns:

    • Toggle on the required PII patterns (e.g., Address, Email, SSN) from the PII Masking interface.

  2. Create Custom Patterns:

    • Use the "Create Pattern" option to define new rules for identifying PII.

  3. Integrate with Workflows:

    • Ensure workflows are configured to use PII Masking before passing data to any LLM or processing nodes.

  4. Test and Validate:

    • Run test cases to confirm that all sensitive data is accurately identified and masked.

Key takeaways for developers

✅ Data Security: Prevents sensitive data from being mishandled or exposed.

✅ Privacy Compliance: Aligns with data protection regulations.

✅ Ease of Use: Simplifies PII protection with pre-built and customizable tools.

By leveraging PII Masking, developers can create secure and privacy-compliant AI workflows without compromising functionality or user experience.
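As an illustration of the idea, the sketch below shows how regex-style patterns could identify and mask PII before text reaches an LLM. The pattern set, the custom LOAN_ID pattern, and the [MASKED_*] placeholder format are assumptions for illustration only; in the Workbench, patterns are configured through the PII Masking interface rather than in code.

    // Illustrative only: regex-based masking similar in spirit to PII Masking.
    // Pattern names and the "[MASKED_*]" placeholder format are assumptions.
    const piiPatterns = {
      EMAIL: /[\w.+-]+@[\w-]+\.[\w.]+/g,
      SSN: /\b\d{3}-\d{2}-\d{4}\b/g,   // e.g. 123-45-6789
      LOAN_ID: /\bLN-\d{6}\b/g         // example of a custom, domain-specific pattern
    };

    function maskPII(text) {
      let masked = text;
      for (const [name, pattern] of Object.entries(piiPatterns)) {
        masked = masked.replace(pattern, `[MASKED_${name}]`);
      }
      return masked;
    }

    // "My SSN is 123-45-6789 and my email is jane@example.com"
    // -> "My SSN is [MASKED_SSN] and my email is [MASKED_EMAIL]"
    console.log(maskPII("My SSN is 123-45-6789 and my email is jane@example.com"));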

Sub-Agent

What is a Sub-Agent?

A Sub-Agent is a specialized component within an AI Agent that handles a specific set of tasks related to a particular domain or functionality. It acts as a modular processing unit, receiving tasks from the main agent after identifying user intent.

Think of Sub-Agents as modules in a typical application—they focus on distinct functionalities within the AI system.

How Sub-Agents Work

  1. User Query Processing

    • The main agent receives the user’s request and determines the intent.

    • If the intent falls within a specific domain, it is delegated to the appropriate Sub-Agent.

  2. Task Execution by Sub-Agent

    • The Sub-Agent processes the request based on the identified user intent.

    • It may extract information, analyze data, summarize content, or answer user queries by delegating execution to the right intent.

  3. Response Generation

    • The Sub-Agent compiles the required information by executing the workflow associated with the intent and sends the response back to the main agent.

    • The main agent then formats and delivers the final response to the user.
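As a mental model of this delegation flow (not UPTIQ platform code), consider the sketch below; the intent names and the sub-agent registry are illustrative assumptions.

    // Illustrative sketch of main-agent -> sub-agent delegation (not UPTIQ APIs).
    const subAgents = {
      "document-qa": async (query) => `Answer derived from uploaded documents for: "${query}"`,
      "loan-status": async (query) => `Loan status lookup for: "${query}"`
    };

    // Assume the main agent has already classified the user's intent.
    async function handleUserQuery(query, intent) {
      const subAgent = subAgents[intent];
      if (!subAgent) return "Sorry, I can't help with that yet.";
      const result = await subAgent(query);     // task execution by the sub-agent
      return `Here is what I found: ${result}`; // main agent formats the final response
    }

    handleUserQuery("What is the net pay on my paystub?", "document-qa").then(console.log);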

Example: Document AI Agent

A Document AI Agent may have a Document Q&A Sub-Agent, which:

  • Handles user queries about uploaded documents.

  • Retrieves, extracts, and summarizes information from documents like paystubs, invoices, and balance sheets.

  • Ensures users get accurate responses based on the document data processed by the AI.

How to create sub-agent?

See the video below to learn how to create Sub-Agents:

Key takeaway for developers

Why Use Sub-Agents?

✅ Modular Structure: Organizes AI workflows efficiently.

✅ Scalability: Allows for adding specialized capabilities without modifying the entire agent.

✅ Improved Accuracy: Sub-agents focus on specific tasks, enhancing performance.

By using Sub-Agents, developers can build structured, scalable AI systems that efficiently handle complex, multi-domain tasks.

Data Gateway

Overview

The Data Gateway feature in UPTIQ AI enables developers to seamlessly connect their AI agents to a wide array of third-party services. By configuring authentication parameters such as API keys and OAuth credentials, developers can access both raw and normalized data through the Data Gateway's APIs, enhancing the capabilities of their AI-driven applications.


  Node

    Overview

    In UPTIQ Workbench, Nodes are the fundamental building blocks of workflows, enabling developers to define, configure, and execute business processes in a declarative manner. Nodes represent distinct actions or tasks within a workflow, allowing seamless automation of AI-driven operations.

    A Node in UPTIQ follows the traditional concept of nodes in Business Process Automation (BPA) systems, where each node is responsible for executing a specific function—such as processing data, making decisions, fetching external resources, or displaying information to users.

    Nodes help structure complex workflows by: ✅ Modularizing logic into discrete, reusable components. ✅ Enabling automation by orchestrating tasks in a predefined sequence. ✅ Reducing code dependency by offering configuration-driven business logic.

    In essence, Nodes allow developers to build scalable, structured, and AI-enhanced workflows without writing extensive custom code.

    Categories of Nodes in UPTIQ

    Nodes in UPTIQ are divided into five major categories, each catering to a different aspect of workflow automation.

    1. Data Nodes (Database Interaction)

    These nodes provide powerful database interaction capabilities to store, retrieve, and query data efficiently. Developers can use Data Nodes to: ✅ Write, read, or query databases, including structured and unstructured data. ✅ Integrate with external databases to fetch real-time information. ✅ Store and retrieve AI-processed data for future queries.

    Example Use Cases:

    • Storing extracted financial data from invoices in a structured table.

    • Fetching historical transactions for user queries.

    2. Integration Nodes (Third-Party & External System Interaction)

    These nodes allow developers to connect workflows with external systems. UPTIQ provides Data Gateway integration, but developers can also connect directly to third-party services.

    ✅ Fetch data from external CRMs, ERPs, or financial platforms. ✅ Send data to third-party systems via APIs. ✅ Automate multi-system workflows by integrating AI agents with business applications.

    Example Use Cases:

    • Fetching vendor details from an accounting system when processing an invoice.

    • Syncing loan application status with an external credit-checking API.

    3. User Interaction Nodes (User Engagement & Frontend Interactions)

    These nodes help developers interact with users within a workflow by collecting inputs, showing outputs, or guiding users with messages.

    ✅ Capture user input dynamically (e.g., text, dropdown selection). ✅ Display messages, confirmation prompts, and loaders. ✅ Present structured data using tables, links, or charts.

    Example Use Cases:

    • Asking a user for approval before proceeding with document processing.

    • Showing a clickable link to download a financial report.

    4. AI Foundational Nodes (GenAI & AI Capabilities)

    One of the most critical node categories in UPTIQ, AI Foundational Nodes unlock Generative AI (GenAI) and advanced AI functionalities, enabling AI-driven automation.

    ✅ Enable Retrieval-Augmented Generation (RAG) to enhance AI responses with real-time data. ✅ Access Large Language Models (LLMs) for content generation, intent classification, and response optimization. ✅ Integrate AI-driven web crawling to fetch and process external knowledge.

    Example Use Cases:

    • Using LLM nodes to summarize financial contract documents.

    • Implementing RAG nodes to enhance AI responses with proprietary business knowledge.

    • Classifying user queries with an Intent Classification Node.

    5. Standard Nodes (Workflow Orchestration & Control)

    This category contains miscellaneous utility nodes that help in workflow orchestration, modularization, and process control. These nodes ensure the smooth execution of workflows by handling logical conditions, loops, and modular workflow structures.

    ✅ Implement conditional branching, loops, and state transitions. ✅ Build reusable workflow modules for better maintainability. ✅ Control the execution sequence within complex workflows.

    Example Use Cases:

    • Routing users through different processes based on document type.

    • Modularizing repetitive workflows like data extraction and validation.

    Key Takeaways for Developers

    ✅ Nodes are the fundamental building blocks that help developers implement business logic in a declarative manner. ✅ Each node category has a specific purpose—data handling, integration, user interaction, AI processing, or workflow control. ✅ AI Foundational Nodes enable advanced AI-driven automation and GenAI capabilities. ✅ Integration Nodes connect workflows with external systems for real-time data exchange. ✅ User Interaction Nodes improve engagement by presenting structured outputs and collecting inputs. ✅ Standard Nodes help with logic control, modularization, and process optimization.

    By understanding Nodes in UPTIQ, developers can design highly flexible, efficient, and AI-powered workflows that automate complex business processes with minimal effort.

    Note for Developers:

    To get a comprehensive understanding of all available nodes and their usage in workflows, developers can reach out to [email protected] for a live session with our experts. This session will provide hands-on guidance on how to effectively use different node categories within UPTIQ Workbench.

    Additionally, detailed documentation for all nodes is coming soon! Stay tuned for updates to explore in-depth configuration, use cases, and best practices for each node type. 🚀

    Key concepts

    Welcome to the exciting world of Generative AI! In this section, we’ll explore some cool and essential ideas for developers itching to create amazing AI agents. We’ll dive into topics like understanding user intents, making inferences, leveraging large language models, and automating AI workflows.

    Get ready to tackle tough challenges and discover awesome possibilities to build smart, scalable AI solutions!

    Key Features
    • Extensive Integration Support: UPTIQ's Data Gateway integrates with over 170 pre-integrated applications across 14 data categories, including accounting, banking, CRM, ERP, e-commerce, payroll, marketing, and POS systems. This extensive integration allows AI agents to access diverse data sources, providing a comprehensive foundation for data-driven decision-making.


    • Real-Time Data Access: By connecting to these platforms, AI agents can retrieve up-to-date information, ensuring that analyses and responses are based on the latest data available.

    • Customizable Embedded Widgets: The Data Gateway offers customizable embedded widgets that can be integrated into applications, providing actionable insights for business owners and relationship managers. These widgets are pre-integrated with leading digital banking providers, enhancing user experience and engagement.

    Seamless API Integration in Workflows

    Once a third-party service is configured—whether using a developer’s own API key or an UPTIQ-provided key (available for limited services)—developers can leverage the API Node within UPTIQ’s workflow builder to make direct calls to these services.

    This integration enables: ✅ Real-time data retrieval from connected services within AI-driven workflows. ✅ Automated data processing by dynamically fetching relevant financial, business, or user-specific information. ✅ Custom workflow logic where AI agents can intelligently interact with external data without requiring manual intervention.

    Key Takeaways for Developers

    ✅ Effortless Third-Party Integration – Connect AI agents to a vast network of services using simple authentication configurations, either with personal API keys or UPTIQ-provided keys for select services.

    ✅ API Node for Direct Access – Utilize the API Node in UPTIQ’s workflow builder to seamlessly call external services, retrieve data, and integrate it into AI-driven decision-making processes.

    ✅ Raw & Normalized Data APIs – Choose between raw data (directly from the source) or normalized data (standardized for consistency), enabling more streamlined data processing across multiple platforms.

    ✅ Secure & Scalable Authentication – Leverage OAuth and API key authentication with built-in security mechanisms to ensure protected and compliant data access.

    ✅ Real-Time & Automated Workflows – Automate AI workflows by dynamically fetching financial, business, or customer-specific data at the right time, improving efficiency and response accuracy.

    ✅ Embedded Insights for Business Apps – Use customizable data widgets to enrich user interfaces with valuable financial insights, improving the overall end-user experience.

    By using UPTIQ’s Data Gateway, developers can quickly integrate, scale, and automate AI agents with real-world data, eliminating the complexity of manual data handling and making AI more actionable.
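    For a rough sense of what a call against a configured third-party service could look like from custom code, here is a minimal sketch using an API key. The endpoint URL, header, and response shape are assumptions, not documented Data Gateway APIs; within the Workbench itself you would normally use the API Node with your configured credentials.

    // Hypothetical example only: the endpoint and response shape are assumptions,
    // not documented Data Gateway URLs. Requires Node.js 18+ for global fetch.
    const API_KEY = process.env.DATA_GATEWAY_API_KEY; // keep credentials out of code

    async function fetchNormalizedTransactions(accountId) {
      const response = await fetch(
        `https://api.example.com/data-gateway/accounts/${accountId}/transactions?normalized=true`,
        { headers: { Authorization: `Bearer ${API_KEY}` } }
      );
      if (!response.ok) throw new Error(`Gateway request failed: ${response.status}`);
      return response.json(); // normalized, source-agnostic transaction records
    }

    fetchNormalizedTransactions("acct-123").then(console.log).catch(console.error);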

    Loop

    Overview

    The Loop Node in UPTIQ Workbench allows workflows to repeatedly execute a process until a specified exit condition is met. This is particularly useful for iterating over lists, verifying input validity, or ensuring a task completes before proceeding.

    By using the Loop Node, developers can: ✅ Handle Iterative Processing – Process lists of items dynamically. ✅ Validate User Input Repeatedly – Ensure correct data entry before proceeding. ✅ Create Conditional Workflows – Execute repeated steps based on runtime conditions.

    Unlike static workflows, the Loop Node dynamically determines whether to continue execution or exit based on real-time data evaluation.

    Configurations

    Field
    Description

    Exit Condition
    A condition (in natural language) that, when met, will cause the loop to exit. If not met, the loop continues.

    Execution Flow:

    1️⃣ The Loop Node receives input from previous steps. 2️⃣ It evaluates the exit condition based on the provided data. 3️⃣ If the condition is met, the node outputs {"action": "exit"} and the loop terminates. 4️⃣ If the condition is NOT met, it outputs {"action": "continue"}, repeating the process.
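    In the Workbench the exit condition is written in natural language and evaluated by the platform. The sketch below only imitates that contract in plain JavaScript, with a regex standing in for the condition “userInput should be a valid email”, to show how the continue/exit outputs drive the loop.

    // Plain-JS imitation of the Loop Node contract: evaluate a condition,
    // then emit { action: "exit" } or { action: "continue" }.
    function evaluateLoop(userInput) {
      // A regex stands in for the natural-language exit condition.
      const isValidEmail = /^[\w.+-]+@[\w-]+\.[\w.]+$/.test(userInput);
      return isValidEmail ? { action: "exit" } : { action: "continue" };
    }

    console.log(evaluateLoop("not-an-email"));     // { action: "continue" } -> ask again
    console.log(evaluateLoop("jane@example.com")); // { action: "exit" } -> proceed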

    Output Format

    Action
    Description

    { "action": "continue" }
    Loop continues.

    { "action": "exit" }
    Loop stops execution.

    Example Use-Cases

    Use-Case 1: Validating User Email Input

    A workflow requires users to enter a valid email before proceeding. The Loop Node repeats the request until a valid email format is entered.

    Configuration:

    Field
    Value

    Exit Condition
    userInput should be a valid email

    Execution Process:

    • If the user enters an invalid email, the node outputs:

      { "action": "continue" }

      🔹 The workflow asks the user to re-enter their email.

    • Once the user enters a valid email, the node outputs:

      { "action": "exit" }

      🔹 The workflow proceeds to the next step.


    Use-Case 2: Iterating Through a List of Questions

    A chatbot is programmed to ask multiple questions, and the workflow loops until all questions are answered.

    Configuration:

    Field
    Value

    Exit Condition
    "If we don’t have any questions left to ask"

    Input to Loop Node:

    { "questions": ["Q1", "Q2", "Q3"] }

    Execution Process:

    1️⃣ First Iteration → The bot asks Q1, removing it from the list.

    2️⃣ Second Iteration → The bot asks Q2, removing it from the list.

    3️⃣ Third Iteration → The bot asks Q3, removing it from the list.

    🔹 Once no questions remain, the workflow exits the loop.


    Use-Case 3: Retry Logic for API Requests

    A workflow fetches data from an external API. If the API fails, the Loop Node retries the request until a successful response is received or retries are exhausted.

    Configuration:

    Field
    Value

    Exit Condition
    "API response is successful or retry count exceeds 3"

    Execution Process:

    1️⃣ If the API fails, the node outputs:

      { "action": "continue" }

    🔹 The API call is retried.

    2️⃣ If the API succeeds or retries reach 3, the node outputs:

      { "action": "exit" }

    🔹 The workflow proceeds to handle the response or logs an error.


    Key Takeaways for Developers

    ✅ Automates Loop Execution – Runs a process repeatedly until a condition is met.

    ✅ Uses Natural Language for Exit Conditions – Unlike traditional programming logic, the Loop Node allows developers to define exit conditions in plain language, making it easy to configure and readable within workflows.

    ✅ Reduces Redundant Workflow Steps – Instead of creating multiple nodes for repetitive tasks, use a Loop Node for dynamic iteration.

    ✅ Enables Smart Decision-Making – The exit condition is evaluated dynamically, ensuring real-time logic execution.

    ✅ Supports Multiple Use Cases – Ideal for validations, API retries, question sequences, and dynamic task execution.

    By integrating the Loop Node, developers can create intelligent, adaptable workflows that respond dynamically to user input, process execution results, and iterative data operations. 🚀

    Knowledge

    What is Knowledge in an Agent?

    In the context of AI agents on UPTIQ, Knowledge refers to the repository of structured and unstructured information that the agent relies on to better understand user queries and provide accurate responses. This knowledge is typically stored in a RAG (Retrieval-Augmented Generation) container, which combines traditional information retrieval techniques with generative AI to enhance the agent's reasoning and intent classification capabilities.

    • RAG Container: A hybrid system where relevant data is retrieved from the knowledge base and passed to the reasoning engine for context-aware responses.

    • Role in Reasoning: Knowledge is not just static data but a dynamic resource that informs the Reasoning Engine when interpreting or classifying user intents. It allows the agent to understand nuanced queries, connect them with relevant information, and generate meaningful responses.

    How Does it Work?

    1. Data Storage:

      • Knowledge is preloaded into the RAG container.

      • It may consist of documents, FAQs, databases, product manuals, or any domain-specific resources.

      • Information can be structured (e.g., tabular data) or unstructured (e.g., natural language text).

    Examples

    1. Customer Support Agent:

      • Knowledge: A collection of FAQs, product manuals, and troubleshooting guides.

      • Functionality: When a customer asks about a specific product feature, the RAG container retrieves the relevant section of the manual, which the reasoning engine uses to generate a response.

    Key takeaway for developers

    In UPTIQ, Knowledge serves as the foundation for intelligent reasoning in AI agents. It equips agents with the ability to:

    ✅ Contextualize user queries.

    ✅ Refine intent classification.

    ✅ Provide factually correct and relevant responses.

    Developers should focus on curating high-quality, domain-specific knowledge bases to maximize the accuracy and utility of their AI agents.
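    Conceptually, retrieval-augmented generation follows a simple pattern: find the knowledge snippets most relevant to the query, then pass them to the LLM as context. The sketch below illustrates that pattern with a toy keyword-overlap scorer standing in for real embeddings; it is not the Workbench's internal implementation.

    // Toy RAG loop: score knowledge snippets against the query, then build an LLM prompt.
    // Keyword overlap stands in for embedding similarity; not UPTIQ's internal code.
    const knowledgeBase = [
      "Personal loans require a minimum credit score of 650.",
      "Wire transfers above $10,000 trigger an additional compliance review.",
      "Premium checking accounts waive monthly fees with a $5,000 balance."
    ];

    function retrieve(query, topK = 2) {
      const terms = query.toLowerCase().split(/\W+/).filter(t => t.length > 3);
      return knowledgeBase
        .map(doc => ({ doc, score: terms.filter(t => doc.toLowerCase().includes(t)).length }))
        .sort((a, b) => b.score - a.score)
        .slice(0, topK)
        .map(({ doc }) => doc);
    }

    function buildPrompt(query) {
      const context = retrieve(query).join("\n");
      return `Answer using only this context:\n${context}\n\nQuestion: ${query}`;
    }

    console.log(buildPrompt("What credit score do I need for a personal loan?"));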

    Web Crawler

    Overview

    The Web Crawler Node in UPTIQ Workbench is designed to extract relevant information from web pages in real-time. Unlike traditional web scrapers, this node is optimized for AI-driven workflows, where extracted content can be processed by Large Language Models (LLMs) to generate structured insights.

    This node is particularly useful for retrieving dynamic, publicly available information, such as company overviews, industry trends, or competitor insights. The extracted data can be refined, summarized, and structured to fit business needs, making it a valuable component for AI-driven research and automation.

    Configurations

    URL (Required)

    • The fully-qualified web address from which data should be retrieved.

    • Example: https://www.uptiq.ai/about

    Instructions (Required)

    • Defines how the extracted web content should be processed.

    • Instructs AI agents on what aspects of the data to analyze and summarize.

    • Example: see the Instructions value in the use-case configuration below.

    Output Format

    • The Web Crawler Node outputs structured data in JSON format.

    • Example output:

    {
      "Mission": "To empower businesses with AI-driven solutions for improved decision-making.",
      "Services": "Offers AI workbench for building and deploying intelligent agents.",
      "Achievements": "Recognized as a leader in low-code AI development platforms."
    }

    Example Use-Case

    1. Summarizing a Company's Information

    Scenario: A user requests an overview of a company. The Web Crawler Node scrapes the company's "About" page and passes the extracted content to an LLM node, which generates a concise, structured summary.

    Workflow Nodes Used in this Use-case

    1. Web Crawler - to scrape and summarize the information.

    2. Display - to display the information to the user.

    Configurations:

    Field
    Value

    URL
    https://www.uptiq.ai/about

    Instructions
    You are an AI assistant tasked with summarizing company information from extracted web content. Analyze the provided data and produce a concise summary in JSON format. Each key in the JSON should represent one aspect of the company, and the corresponding value should be a brief summary of that aspect. Focus on critical details like the company's mission, services, achievements, and any other notable points.

    Workflow Steps:

    1. Web Crawler Node scrapes https://www.uptiq.ai/about to extract relevant content.

    2. LLM Node processes the extracted content and generates a structured summary.

    3. Display Node presents the final output to the user.

    Final Output:

    {
      "Mission": "To empower businesses with AI-driven solutions for improved decision-making.",
      "Services": "Offers AI workbench for building and deploying intelligent agents.",
      "Achievements": "Recognized as a leader in low-code AI development platforms."
    }

    Key Takeaways for Developers

    ✅ Real-Time Data Extraction – The Web Crawler Node retrieves fresh, publicly available content from websites for AI processing.

    ✅ Structured AI-Driven Summarization – Extracted content is refined using LLMs, ensuring concise and contextually relevant outputs.

    ✅ Customizable Processing Instructions – Developers can tailor how extracted data is interpreted and structured by modifying the instructions field.

    ✅ JSON-Formatted Output – Ensures compatibility with other workflow components for seamless data handling.

    By integrating the Web Crawler Node into workflows, developers can automate web-based data retrieval and AI-powered summarization, significantly enhancing information accessibility and decision-making. 🚀

    Upload Document

    The Upload Document Node in UPTIQ Workbench provides a seamless way to ingest documents into a workflow, enabling users to extract information, process files, and interact with document content dynamically.

    This node supports multiple upload methods, allowing developers to: ✅ Accept file uploads directly from users. ✅ Retrieve documents via signed URLs from external systems. ✅ Process base64-encoded documents for advanced automation.

    Once a document is uploaded, the node returns a documentId, which serves as a reference for further processing—such as: 🔹 Extracting text and data from documents. 🔹 Converting documents into images or zipped archives. 🔹 Passing the document to the Prompt Node for AI-driven queries. 🔹 Uploading the file to an external storage system via pre-signed URLs.

    The Upload Document Node is a core component in document-driven workflows, making it easy to process files dynamically and integrate them with AI-powered document intelligence.

    Table Write

    Overview

    The Table Write Node in UPTIQ AI Workbench allows developers to store, update, and manage structured data within an agent’s persistent storage layer. Unlike traditional databases, UPTIQ’s Table concept provides a simplified yet effective way to maintain structured data that remains accessible across workflows.

    This node is crucial for workflows requiring data persistence, such as tracking transactions, maintaining user records, logging workflow actions, and managing application statuses.

    Query Interpretation:

    • When a user query is received, the Reasoning Engine first attempts to classify the intent.

    • To refine the classification or respond accurately, it retrieves contextually relevant information from the RAG container.

    • This ensures that responses are both intent-driven and knowledge-informed.

  • Retrieval-Augmented Generation:

    • Relevant data is fetched from the knowledge base using sophisticated search algorithms, embeddings, or semantic similarity techniques.

    • The retrieved data is used to inform the generative AI model, ensuring responses are precise and fact-based.

  • Real-Time Execution:

    • Knowledge is dynamically accessed during the workflow execution to enrich outputs, adapt responses, or resolve ambiguities in user queries.

    • For example, a workflow node might explicitly call for retrieving data from the RAG container as part of its process.

  • Loan Origination AI Agent:

    • Knowledge: Bank policies, loan application criteria, and documentation templates.

    • Functionality: If a user asks about the eligibility criteria for a loan, the agent retrieves the relevant policy details and provides a tailored explanation.

  • Healthcare Assistant:

    • Knowledge: Medical guidelines, patient records, and drug interaction databases.

    • Functionality: When a user inquires about a possible drug interaction, the agent pulls information from its knowledge repository and advises accordingly.

  • Document QA Workflow

    • Knowledge: A repository of documents (e.g., PDFs) stored in the RAG container.

    • Functionality: A query like "What is the interest rate on this loan?" would trigger the workflow to:

      • Fetch the document using the external database node.

      • Extract relevant sections using the reasoning engine.

      • Deliver an accurate response informed by the document's content.

    Generative AI Models

    A Generative AI Model is an advanced artificial intelligence (AI) model designed to process and generate human-like text based on vast amounts of data. It is trained using deep learning techniques, particularly transformer architectures (like GPT, BERT, or LLaMA), and can understand, predict, and generate language in a way that mimics human communication.

    What It Is?

    Generative AI models are algorithms that generate new content based on patterns they have learned from data. Instead of just analyzing or classifying data, they create text, images, music, or even code.

    How Does It Work?

    Think of it like a student learning to write essays by reading thousands of articles. Over time, the student can write original essays that sound natural. Generative AI does the same but much faster.

    Examples:

    • GPT-4 (by OpenAI): Writes text, answers questions, helps with coding.

    • DALL·E 3: Creates images from text descriptions.

    • Stable Diffusion: Generates art and graphics based on inputs.

    Where It’s Used?

    • Writing emails, blogs, and reports.

    • Generating financial summaries.

    • Creating marketing images and designs.


    Configurations

    Field
    Description

    Upload Method

    Defines how the document will be uploaded. Options: User Input, Content, SignedURL.

    Supported File Types (For User Input)

    Specifies allowed file types (e.g., [PDF, DOCX, JPG]).

    Content (For Content Upload)

    Accepts a base64-encoded string representing the document.

    Signed URL (For SignedURL Upload)

    A temporary URL that enables document upload from an external system.

    Upload to External System (Optional)

    Allows uploading the document to a pre-configured external storage system.

    Upload Methods Explained

    1️⃣ User Input → Users upload a file directly.

    • Example: A user submits a loan application PDF for processing.

    • Requires specifying Supported File Types (e.g., [PDF, DOCX]).

    2️⃣ Content → Uploads a document using a base64-encoded string.

    • Example: An automated workflow sends document content for AI processing.

    • Requires providing the base64 document string in the Content field (a minimal encoding sketch follows this list).

    3️⃣ SignedURL → Retrieves documents from an external storage system using a pre-signed URL.

    • Example: A workflow pulls an invoice from a cloud-based document storage system.

    • Requires specifying a Signed URL that grants temporary access for secure file transfer.
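    For the Content upload method, the document must be supplied as a base64-encoded string. A minimal Node.js sketch for producing that string is shown below; the file path is a placeholder, and how the string is passed into the node depends on your workflow configuration.

    // Minimal sketch: read a local file and produce the base64 string expected
    // by the Content upload method. The path is a placeholder.
    const fs = require("fs");

    function toBase64(filePath) {
      return fs.readFileSync(filePath).toString("base64");
    }

    const encoded = toBase64("./loan-application.pdf");
    console.log(encoded.slice(0, 60) + "..."); // preview of the encoded payload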

    Output Format

    After successful execution, the node returns:

    • documentId → Unique identifier for the uploaded document.

    • key → A reference path that can be used for further processing.

    {
      "documentId": "a0abf1d4-a4ca-459e-aada-b10947481b9c",
      "key": "/executions/192012/example"
    }

    Example Use-Cases

    1. Loan Application Processing

    A loan processing workflow requires users to upload financial documents for verification.

    • Configuration:

      • Upload Method: User Input

      • Supported File Types: [PDF, DOCX]

    • Outcome:

      • The user uploads their loan application document.

      • The documentId is returned for further processing (e.g., OCR extraction, AI-based validation).


    2. AI-Powered Document Summarization

    A workflow needs to extract and summarize the content of a legal contract using AI.

    • Configuration:

      • Upload Method: Content

      • Content: Base64-encoded document string

    • Outcome:

      • The document is uploaded without manual user input.

      • The documentId is passed to the Prompt Node, where an AI model generates a contract summary.


    3. Retrieving Invoices from Cloud Storage

    A finance workflow needs to fetch invoices stored in an external system and process them.

    • Configuration:

      • Upload Method: SignedURL

      • Signed URL: "https://cloudstorage.com/get-invoice?token=abc123"

    • Outcome:

      • The invoice is retrieved via the signed URL, enabling automated processing without manual uploads.

    Key Takeaways for Developers

    ✅ Flexible Upload Options – Supports direct user uploads, base64 content processing, and external storage retrieval.

    ✅ Seamless AI Integration – Uploaded documents can be passed to AI models for text extraction, summarization, and querying or for any other use in the workflows.

    ✅ Optimized for Document Processing – Returns a documentId, which can be used in subsequent workflow steps for conversion, compression, or data extraction. Nodes that accept this documentId can fetch the document content directly, without developers having to send the complete document.

    ✅ Secure & Scalable – Supports pre-signed URLs for secure external storage access, enabling workflows to fetch documents without exposing sensitive credentials.

    By leveraging the Upload Document Node, developers can streamline document-driven automation and enhance AI workflows by integrating real-time document ingestion into business processes. 🚀


    Refer to the Table section for guidance on creating tables in Uptiq before utilizing it with the Table Write node.

    Configurations

    Field
    Description

    Table

    Select the table where the operation will be performed, e.g., Transactions.

    Operation

    Choose the type of database action: Insert Many, Update, or Delete.

    Filter (For Update & Delete)

    Define a JSON filter to identify which records need modification or removal.

    Data (For Insert Many & Update)

    Provide the new or updated data in JSON format.

    Operations & How They Work

    1. Insert Many (Bulk Insert)

      • Adds multiple records at once to the selected table.

      • Example Data:

        [
          { "transactionId": "T123", "status": "completed", "amount": 500 },
          { "transactionId": "T124", "status": "completed", "amount": 1000 }
        ]

    2. Update (Modify Existing Records)

      • Updates specific records that match a defined filter.

      • Example Filter:

      • Example Data (New Values for Matching Records):

    3. Delete (Remove Records)

      • Deletes records based on a filter condition.

      • Example Filter:
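    The filter and data payloads for the Update and Delete examples above are not reproduced here, so the values below are illustrative placeholders; the field names mirror the loan-application use cases later in this section.

    // Illustrative payloads for the Table Write node (field names assumed).
    const updateFilter = { status: "pending review" }; // which records to modify
    const updateData = { status: "approved" };         // new values for matching records
    const deleteFilter = { status: "rejected" };       // which records to remove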

    Response Format

    After execution, the node provides a structured response confirming the operation results, such as:

    {
      "inserted": 2,
      "updated": 1,
      "deleted": 3
    }

    This allows subsequent workflow nodes to act on the results dynamically.

    Example Use-Cases

    Use-Case 1: Tracking Loan Application Status

    A financial institution’s loan processing workflow needs to update loan statuses after review.

    • Configuration:

      • Table: LoanApplications

      • Operation: Update

      • Filter: { "status": "pending review" }

      • Data: { "status": "approved" }

    • Expected Outcome:

      • All pending review applications will be marked as approved.

      • The response will indicate the number of records updated.


    Use-Case 2: Recording Transaction Logs

    A workflow captures payment transactions and needs to persist them in the database for future reference.

    • Configuration:

      • Table: Transactions

      • Operation: Insert Many

      • Data:

    • Expected Outcome:

      • New transactions are stored in the table, ensuring future workflows can access them.


    Use-Case 3: Cleaning Up Rejected Applications

    A workflow runs periodically to delete rejected loan applications that are older than 30 days.

    • Configuration:

      • Table: LoanApplications

      • Operation: Delete

      • Filter: { "status": "rejected" }

    • Expected Outcome:

      • All records with status: rejected are removed, optimizing storage.

    Key Takeaways for Developers

    ✅ Enables Persistent Data Storage – Maintain structured data across workflows without relying on an external database.

    ✅ Supports Bulk Inserts & Updates – Efficiently write multiple records in one operation, improving workflow performance.

    ✅ Works with Conditional Filters – Modify or delete records based on dynamic conditions.

    ✅ Ideal for Transaction Logs, Application Tracking, and Record Management – Best suited for workflows that require data persistence and structured storage.

    By leveraging the Table Write Node, developers can build workflows with structured, persistent data handling, ensuring that business processes retain historical data, manage transactions, and optimize workflow efficiency. 🚀

    JavaScript

    Overview

    The JavaScript Node in UPTIQ Workbench enables developers to execute custom JavaScript code within a workflow. While built-in workflow nodes handle many automation tasks, there are scenarios where custom logic, data transformation, or conditional operations are required. The JavaScript Node provides the flexibility to manipulate, filter, or format data dynamically before passing it to the next step in the workflow.

    With this node, developers can: ✅ Perform data transformation by modifying, formatting, or restructuring JSON objects. ✅ Execute mathematical computations such as tax calculations or discount applications. ✅ Implement conditional logic to alter workflow paths based on input values. ✅ Merge, clean, or reformat API responses before passing them to the next node. ✅ Work with context variables, agent-level data, and secret variables for secure and dynamic processing.

    Configurations

    Field
    Description

    Execution Flow

    1️⃣ The JavaScript Node receives input from previous workflow steps. 2️⃣ It executes custom JavaScript logic, transforming or processing the input. 3️⃣ The output of the script is passed to the next node in the workflow.

    Example Syntax for Variable Usage
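    The original syntax example is not reproduced here, so the following is a runnable sketch of the pattern described in this section. Inside an actual JavaScript Node the platform provides these values; here they are passed in as sample parameters. Only the secret.<var_name> form is referenced in this section; the input and context names are assumptions.

    // Runnable sketch of the variable-usage pattern (names other than
    // secret.<var_name> are placeholders, not documented bindings).
    function nodeScript(input, context, secret) {
      const customerName = input.name;        // value produced by the previous node
      const region = context.region;          // workflow/agent-level variable
      const apiKey = secret.EXTERNAL_API_KEY; // secret variable, never hard-coded
      return {
        greeting: `Hello ${customerName} from ${region}`,
        hasApiKey: Boolean(apiKey)
      };
    }

    console.log(nodeScript({ name: "Jane" }, { region: "US-East" }, { EXTERNAL_API_KEY: "demo-key" }));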

    Example Use-Cases

    Use-Case 1: Formatting API Response Data

    A workflow fetches user details from an external API, but the response includes unnecessary fields. The JavaScript Node is used to extract and reformat the required fields while also deriving a new field (location).

    API Response (From Previous Node)

    JavaScript Node Code Snippet

    Output Passed to Next Node
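    The original response, snippet, and output are not reproduced above, so the following is an illustrative sketch; the field names (firstName, city, state, and so on) are assumptions.

    // Sample API response from the previous node (fields assumed for illustration).
    const input = {
      id: 42,
      firstName: "Jane",
      lastName: "Doe",
      city: "Austin",
      state: "TX",
      internalScore: 0.87,
      createdAt: "2024-01-05"
    };

    // JavaScript Node logic: keep only what the next node needs and derive `location`.
    function transform(data) {
      return {
        fullName: `${data.firstName} ${data.lastName}`,
        location: `${data.city}, ${data.state}` // derived field
      };
    }

    // Output passed to the next node:
    // { fullName: "Jane Doe", location: "Austin, TX" }
    console.log(transform(input));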

    🔹 How It Helps: ✔ Removes unnecessary fields. ✔ Combines multiple fields into a structured response. ✔ Prepares the data for the next workflow step.


    Use-Case 2: Applying Conditional Business Logic

    A loan eligibility check requires that if a user's credit score is below 650, a flag should be set for manual review.

    JavaScript Node Code Snippet

    Output Passed to Next Node
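    An illustrative sketch of this check (input field names assumed):

    // Sample input from the previous node (field names assumed).
    const input = { applicantId: "A-1001", creditScore: 612 };

    // Flag applications below 650 for manual review.
    function checkEligibility(application) {
      return {
        ...application,
        requiresManualReview: application.creditScore < 650
      };
    }

    // Output passed to the next node:
    // { applicantId: "A-1001", creditScore: 612, requiresManualReview: true }
    console.log(checkEligibility(input));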

    🔹 How It Helps: ✔ Automates eligibility checks without requiring a separate ruleset. ✔ Streamlines manual review processes based on conditions.


    Use-Case 3: Calculating Discounts Based on Order Total

    A workflow calculates a discount percentage based on an order’s total amount.

    JavaScript Node Code Snippet

    Output Passed to Next Node
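    An illustrative sketch; the discount tiers and field names are assumptions, not values from the original documentation.

    // Sample input from the previous node (structure assumed).
    const input = { orderId: "O-778", orderTotal: 1250 };

    // Discount tiers are illustrative only.
    function applyDiscount(order) {
      let discountPercent = 0;
      if (order.orderTotal >= 1000) discountPercent = 10;
      else if (order.orderTotal >= 500) discountPercent = 5;
      const discountAmount = (order.orderTotal * discountPercent) / 100;
      return { ...order, discountPercent, finalTotal: order.orderTotal - discountAmount };
    }

    // Output passed to the next node:
    // { orderId: "O-778", orderTotal: 1250, discountPercent: 10, finalTotal: 1125 }
    console.log(applyDiscount(input));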

    🔹 How It Helps: ✔ Implements dynamic business logic for discount calculation. ✔ Reduces dependency on external services for simple calculations.

    Best Practices & Key Takeaways for Developers

    ✅ Always Return an Output – Ensure the script returns a valid JavaScript object or primitive value, as this is passed to the next node.

    ✅ Validate Inputs – Use checks to avoid undefined values or workflow failures.

    ✅ Use Secret Variables Securely – When working with API keys or sensitive data, store them in secret.<var_name> instead of hardcoding values.

    ✅ Optimize Performance – Keep scripts lightweight to avoid workflow execution delays.

    ✅ Error Handling is Essential – Use try...catch to gracefully handle failures within the script.

    Example Error Handling Pattern
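    A minimal sketch of such a pattern, assuming the script receives an object with a numeric amount field:

    // Defensive pattern for JavaScript Node scripts: validate input and fail
    // gracefully instead of breaking the workflow. The input shape is assumed.
    function safeTransform(input) {
      try {
        if (!input || typeof input.amount !== "number") {
          throw new Error("Missing or invalid 'amount' in input");
        }
        return { ok: true, amountWithTax: Number((input.amount * 1.08).toFixed(2)) };
      } catch (error) {
        // Return a structured error so downstream nodes can branch on it.
        return { ok: false, error: error.message };
      }
    }

    console.log(safeTransform({ amount: 100 })); // { ok: true, amountWithTax: 108 }
    console.log(safeTransform({}));              // { ok: false, error: "Missing or invalid 'amount' in input" }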

    By integrating the JavaScript Node, developers can extend workflow functionality beyond built-in nodes, allowing for custom logic execution, data transformation, and dynamic workflow adaptability. 🚀

    Rulesets

    Overview

    Rulesets in UPTIQ enable developers to define business rules that can be executed within agentic workflows to automate decision-making. These rules help streamline processes like loan origination, compliance validation, and data filtering based on predefined conditions.

    For example, in a loan origination workflow, a ruleset can be used to automatically filter out loan applications where:

    • The loan amount is less than $1000

    • The borrower’s age is under 18

    By defining these conditions in a Ruleset, the AI agent can evaluate applications instantly and proceed only with the ones that meet the eligibility criteria.

    Creating a Ruleset in UPTIQ

    1. Navigate to Config & Utils → Click "Create Ruleset"

    2. Enter a Name for the Ruleset and Save.

    3. Define Facts (Input Variables):

      • Click on the created Ruleset and select "Create Fact."

    Using a Ruleset in Workflows

    Once a Ruleset is created, it can be integrated into a workflow to dynamically evaluate conditions and automate decision-making. Let’s go step by step with an example.

    Example Workflow Scenario

    Suppose the previous node in the workflow is a JS Node that processes loan application data and produces the following JSON output:
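    The JSON itself is not shown here; based on the values evaluated later in this example (loan amount 1500, borrower age 25), it would look roughly like this, with field names assumed:

    {
      "loanAmount": 1500,
      "borrowerAge": 25
    }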

    In UPTIQ workflows, each node’s output serves as the input for the next node. This means the Ruleset Node will receive the above JSON data as input.

    Mapping Runtime Values to Ruleset Facts

    To enable the Ruleset to evaluate conditions dynamically, we map input variables from the previous node’s output to the corresponding Facts defined in the Ruleset.

    1. Drag a "Ruleset Node" into the workflow

      • Position it after the JS Node that generates the loan application data.

    2. Select the Created Ruleset

    Execution & Decision Evaluation

    • The Ruleset Node evaluates the conditions:

      • ✅ Loan Amount (1500) is greater than $1000 → Rule Passes

      • ✅ Borrower Age (25) is greater than 18 → Rule Passes

    • If all required conditions are met, the corresponding output variables are included in the Ruleset Node’s output.

    What Happens Next?

    • If both rules pass, the workflow proceeds to the next step, such as approval, document processing, or further validations.

    • If one or more rules fail, the output variable for that rule will not be included, allowing developers to implement alternative paths, such as rejection or additional review.

    Key Takeaways for Developers

    ✅ Automate Business Logic – Use Rulesets to define and execute structured decision-making processes without manual intervention. ✅ Flexible & Scalable – Define multiple rules within a Ruleset to support complex decision-making in AI workflows. ✅ Real-Time Rule Execution – Pass dynamic runtime values to evaluate conditions in real time. ✅ Seamless Workflow Integration – Easily integrate Ruleset nodes in workflows to automate approvals, filter data, or trigger actions based on rule outcomes.

    By leveraging Rulesets, developers can create smarter AI agents that make automated, context-aware decisions, improving efficiency and accuracy across various AI-driven processes.

    Document To Image

    Overview

    The Document to Image Node abstracts the process of converting any document (PDF, DOCX, Excel, etc.) into images, making it easier to pass structured data to LLMs for extraction.

    This node is critical in workflows where: ✅ Documents need to be summarized before being processed. ✅ Text extraction accuracy needs improvement (reducing formatting errors). ✅ OCR and AI-driven tools require images for better text recognition. ✅ Structured data from Excel sheets needs to be extracted accurately (e.g., financial tables).

    Workflow

    What is a workflow?

    Understanding UPTIQ AI Workbench Workflows

    Workflows in UPTIQ AI Workbench represent low-code to no-code solutions designed to help developers implement business logic for specific AI agents. These workflows act as the backbone of AI agents.

    Storage

    Overview

    The Storage feature in UPTIQ Workbench functions similarly to Tables but is specifically designed for document storage. While Tables handle structured data storage, Storage allows AI agents to store, retrieve, and manage documents across workflows. This ensures that AI-driven processes have seamless access to necessary files for decision-making, document processing, and data extraction.

    With Storage, developers can: ✅ Store various document types (e.g., PDFs, images, scanned forms) that AI agents process. ✅ Retrieve stored documents dynamically within workflows. ✅ Enable AI models to reference documents

    Developer Support

    Developer Support Information

    For any development-related queries, assistance, data gateway integrations, or troubleshooting while working with the Uptiq AI Workbench, developers can reach out to our support team. Whether you need help with agent configuration, integrations, workflows, or debugging, our team is available to provide guidance and resolve issues efficiently.

    Feel free to contact us for:

    • Technical support on building and deploying AI agents.

    RAG (Retrieval Augmented Generation)

    RAG, or Retrieval-Augmented Generation, is like giving your AI model a superpower to find and use extra information when it needs it.

    Why is it needed?

    AI models are only as good as the data they're trained on. Sometimes, that data might not be enough to answer a question or complete a task accurately. RAG solves this problem by letting the AI model access and use additional, relevant information from external sources.

    Agent

    Get started with an UPTIQ AI agent. Your own AI Assistant.

    While LLMs are powerful, their capabilities are bound by their pre-existing knowledge. With UPTIQ AI Agent, you can go beyond these limitations by integrating customized, relevant workflows to supercharge your AI agent. Whether it’s leveraging prebuilt workflows or designing your own, the Workbench empowers you and your team to create AI solutions tailored to your needs. Your AI assistant is now smarter, faster, and more aligned with your goals.

    What does it do?

    The agent is the main point of contact for users, handling their questions and getting things done. It is the decision-maker, figuring out what needs to be done to meet user requests and the best way to do it. This key role makes the agent the center of user interaction and task completion, highlighting its importance in ensuring smooth user experiences and efficient results.

    Intent Classification

    What is Intent Classification?

    Intent classification is the process of identifying the purpose or goal behind a user’s input in an AI-driven application. It enables AI agents to determine what the user wants and route the request to the correct workflow or response.

    Importance of Intent Classification

    • Helps AI applications understand user queries accurately.

    • Routes the user to the correct process or action.
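    As a purely conceptual illustration, intent classification maps a free-form query onto one of a fixed set of intents, which in turn selects the workflow to run. The toy keyword matcher below conveys the idea; the Workbench itself uses LLM-driven classification.

    // Toy keyword-based intent classifier; conceptual illustration only.
    const intents = {
      check_balance: ["balance", "how much", "account"],
      loan_status: ["loan", "application", "approved"],
      document_qa: ["document", "paystub", "invoice"]
    };

    function classifyIntent(query) {
      const text = query.toLowerCase();
      for (const [intent, keywords] of Object.entries(intents)) {
        if (keywords.some(k => text.includes(k))) return intent;
      }
      return "fallback";
    }

    console.log(classifyIntent("Has my loan application been approved yet?")); // "loan_status"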

    AI Agents

    As a developer, why should you know about AI Agents?

    Understanding AI agents is crucial for developers as they are central to building intelligent and autonomous systems. By mastering the design, development, and deployment of AI agents, developers can unlock a new era of innovation, streamlining complex processes and enhancing user interactions. These agents can be programmed to learn and adapt, making them invaluable for tasks that require decision-making and problem-solving skills. Furthermore, AI agents can operate autonomously, reducing the need for human intervention and increasing efficiency.

    Developers proficient in AI agent frameworks and tools can accelerate development cycles and remain at the forefront of the rapidly evolving AI landscape. These frameworks provide a foundation for building intelligent agents, while tools facilitate tasks such as data collection, model training, and agent deployment. By leveraging these resources, developers can create sophisticated AI agents capable of tackling a wide range of challenges.

    Moreover, AI agents can be integrated into various applications, from customer service chatbots to autonomous vehicles. This versatility makes them an essential tool for developers across industries. As AI technology continues to advance, we can expect even more innovative applications of AI agents, further solidifying their importance in the field of software development.

    Introduction

    This page provides an overview of essential Generative AI concepts, such as intent classification, inference, and AI workflow automation, crucial for understanding and building intelligent AI agents.

    What is Generative AI?

    Generative AI refers to artificial intelligence systems that can create new content, such as text, images, music, and even code, by learning from patterns in data. Unlike traditional AI models that classify or predict based on existing data, generative AI can generate novel outputs.

    Workflows are the backbone of AI agents, enabling them to process user queries and deliver desired responses or actions. Each workflow is essentially a sequence of interconnected nodes, with each node representing a specific action, capability, or logic. To effectively create workflows, developers must understand the capabilities of these nodes, as they are the building blocks for crafting the behavior of AI agents.

    Key Features of Workflows:

    1. Visual Interface: Workflows are created using a visual interface, as seen in the image above. Developers can drag and drop nodes to design the logic without requiring extensive coding knowledge.

    2. Node-Centric Structure: Each node in a workflow represents a predefined capability, such as fetching data, transforming documents, or integrating external APIs. Developers need to learn these node capabilities, similar to learning functions in a programming language like Python.

    3. Execution Logic: The workflow begins when the reasoning engine interprets a user query and identifies the relevant intent. It then executes the corresponding workflow to process the query and return results.

    4. Pre-Built Capabilities: The system includes pre-built nodes for various actions, such as:

      • Data Operations: Fetching data from external databases, reading or writing to tables, filtering data, and querying graph databases.

      • Integrations: API calls, webhooks, CRM integrations, and notifications.

      • AI-Specific Operations: OCR processing, document conversion, and invoking large language models (LLMs) for text-based responses.

    Analogy for Developers:

    Think of workflows as programs or functions, and nodes as the syntax or commands you use to write them. Just as developers must learn Python syntax to write effective Python code, developers working with UPTIQ AI Workbench must understand the functionality and configuration of each node to build workflows efficiently. Mastering these nodes allows for the creation of sophisticated and tailored AI agent behaviors.

    Example Workflow (Based on Image):

    In the workflow depicted in the image:

    1. External Database Node: Fetches data from an external source.

    2. Data Processing Nodes:

      • Fetch Document: Retrieves a specific document.

      • Document to Image: Converts a document into an image format if it’s a PDF.

      • Pass Through: Directly passes the data if it's not a PDF.

    3. AI Logic Nodes:

      • Prompt: Sends a query to the LLM for generating intelligent responses.

      • Display: Presents the results to the end user.

    4. Custom JavaScript Nodes:

      • Adds flexibility by allowing developers to execute custom logic when needed.

    Key takeaway for developers:

    To leverage the full potential of UPTIQ AI Workbench workflows, developers should:

    ✅ Explore and understand the purpose and configuration of each node.

    ✅ Experiment with different workflows to see how nodes interact.

    ✅ Treat workflows as modular, reusable components of an AI agent's behavior.

    By mastering workflows, developers can create powerful, efficient, and intelligent AI agents to meet specific business needs with minimal coding effort.

    How to create a workflow?

    Storage

    UPTIQ Workbench provides Storage so that workflows can persist documents and retrieve them later for information extraction and analysis.

    How Storage Works in AI Workflows

    To facilitate document handling, UPTIQ provides specialized workflow nodes for reading and writing storage data.

    • Storage Write Node → Saves documents into a configured storage location.

    • Storage Read Node → Retrieves documents using a unique Storage ID, allowing workflows to access required files dynamically.

    📌 For detailed node configuration, refer to: ➡ Storage Read Node ➡ Storage Write Node
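    To make the read/write flow concrete, here is a minimal Code Snippet sketch. It assumes, purely for illustration, that an earlier Storage Write Node produced an object containing a storageId field; the actual output schema is described on the node configuration pages linked above.

    // Illustrative only: assumes the previous node's output includes a storageId field
    const main = () => {
      const { storageId } = input;        // Storage ID produced when the document was saved
      return {
        storageId,                        // forward it so a later Storage Read Node can fetch the file
        label: "loan-application-package" // hypothetical label for downstream routing
      };
    };
    main();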

    Key Features & Benefits of Storage

    ✅ Seamless Document Management – Store AI-processed files and retrieve them across workflows as needed.

    ✅ Workflow Automation – Enable AI agents to dynamically fetch relevant documents for review, processing, or analysis.

    ✅ Persistent File Access – Unlike transient data passed between workflow nodes, documents stored via Storage remain accessible for future use.

    ✅ Optimized for AI Document Processing – AI agents can extract, summarize, or validate information directly from stored files, making workflows more intelligent.

    ✅ Flexible & Scalable – Designed to support various file formats, making it ideal for applications such as loan origination, legal document processing, financial analysis, and compliance verification.

    Best Practices for Using Storage in Workflows

    ✔ Organize Storage Efficiently – Use structured storage locations for different file types (e.g., applications, invoices, contracts).

    ✔ Utilize Storage IDs for Retrieval – Always reference documents using Storage IDs to ensure precise access.

    ✔ Optimize File Handling – Store only necessary documents to manage storage efficiently and reduce retrieval times.

    ✔ Ensure Compliance & Security – Implement proper access controls to prevent unauthorized document access.

    Key Takeaways for Developers

    ✅ Use Storage for Persistent Document Handling – AI workflows can dynamically fetch and process documents stored in Storage.

    ✅ Leverage Read/Write Nodes for Automation – Automate document storage and retrieval within agent workflows.

    ✅ Integrate with AI-Driven Workflows – Enable AI agents to extract insights from stored documents, improving automation and decision-making.

    By integrating Storage into workflows, developers can enhance AI-powered document management, making AI agents more efficient and capable of processing real-world business documents.

    Problems without RAG and How RAG Solves Them
    1. Limited Knowledge

    • Without RAG: AI models are confined to the knowledge they were trained on. If a user's query falls outside this scope, the model cannot provide a satisfactory answer. For example, if an AI chatbot is asked about a recent news event it wasn't trained on, it would be unable to respond accurately.

    • With RAG: The AI model can access external knowledge sources like the internet to find relevant information about the news event and generate an appropriate response.

    2. Handling Ambiguity

    • Without RAG: Ambiguous queries can be challenging for AI models. If a question has multiple possible interpretations, the model might not know which one to choose.

    • With RAG: The AI model can use external knowledge to disambiguate the query. For instance, it could search for information about the different meanings of a word to determine the most likely interpretation in the given context.

    3. Contextual Understanding

    • Without RAG: Some queries require contextual understanding beyond the immediate text. An AI model might struggle to answer a question that relies on cultural references or domain-specific knowledge it lacks.

    • With RAG: The AI model can leverage external sources to gain the necessary context. For example, it could search for information about a cultural reference to understand a nuanced question.

    4. Stale Information

    • Without RAG: AI models trained on static datasets become outdated as the world changes. Information that was accurate at the time of training may no longer be valid. For instance, an AI model trained on product prices from a year ago might give incorrect information due to price fluctuations.

    • With RAG: The AI model can retrieve up-to-date product prices from the web, ensuring the user receives accurate information.

    General Steps to Build a RAG Pipeline

    1. Select Data Sources: Identify the repositories where your AI model will access supplementary information. These sources can include internal databases, external APIs, cloud storage, or web search results. The choice depends on the specific use case and the kind of information needed to augment the model's responses.

    2. Choose a Retrieval Method: Select the strategy your AI model will use to search and retrieve relevant data from the chosen sources.

      1. Keyword Search: This method looks for exact matches of the specified keywords within the data. It's a simple and fast approach but can miss relevant information if the wording is slightly different. Example: Searching for "climate change" will only return results that contain those exact words and might miss articles about "global warming."

      2. Semantic Search: This technique goes beyond keyword matching and considers the meaning and context of words to find relevant information. It can handle synonyms, related terms, and different phrasings.

        Example: A semantic search for "climate change" might also return results about "rising sea levels," "greenhouse gas emissions," and "environmental impact."

      3. Embeddings: This approach converts text into numerical vectors (embeddings) that capture the semantic meaning of the words. These vectors can be compared to find semantically similar information, even if the wording is different. Embeddings are often used in conjunction with vector databases, which efficiently store and search for similar vectors.

        Example: An embedding for "climate change" might be close to the embeddings for "global warming," "environmental crisis," and "sustainability," allowing the model to find relevant information even if the exact keywords aren't present. (A minimal code sketch of this idea follows these steps.)

    3. Integrate the Retrieval System: Connect your AI model to the chosen data sources and implement the selected retrieval method. This step often involves using APIs or software libraries to establish communication between the model and the data repositories.

    4. Fine-Tune the Model: Optimize the AI model to effectively utilize the retrieved information. This may involve adjusting model parameters or training the model on specific data to improve its ability to generate accurate and coherent responses that incorporate the retrieved context.
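    To make the embeddings-based retrieval from step 2 concrete, here is a minimal JavaScript sketch. The three-dimensional vectors and document texts are invented for illustration; real embeddings come from an embedding model and typically have hundreds or thousands of dimensions.

    // Cosine similarity between two equal-length vectors
    const cosineSimilarity = (a, b) => {
      const dot = a.reduce((sum, v, i) => sum + v * b[i], 0);
      const norm = (v) => Math.sqrt(v.reduce((s, x) => s + x * x, 0));
      return dot / (norm(a) * norm(b));
    };

    // Toy document embeddings (illustrative values only)
    const documents = [
      { text: "Report on global warming and greenhouse gas emissions", embedding: [0.9, 0.1, 0.2] },
      { text: "Quarterly balance sheet summary", embedding: [0.1, 0.8, 0.3] },
    ];

    const queryEmbedding = [0.85, 0.15, 0.25]; // e.g., an embedding for "climate change"

    // Rank documents by semantic similarity to the query and retrieve the best match
    const ranked = documents
      .map((doc) => ({ ...doc, score: cosineSimilarity(queryEmbedding, doc.embedding) }))
      .sort((a, b) => b.score - a.score);

    console.log(ranked[0].text); // the most relevant document is passed to the LLM as context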

    Building a RAG pipeline can be complex and time-consuming. However, UPTIQ AI Workbench simplifies this process by providing a declarative framework that allows developers to define the desired behavior of the pipeline without having to implement the underlying retrieval and integration logic. This abstraction can significantly accelerate the development and deployment of RAG-based applications.

    Check out how you can build a RAG pipeline with UPTIQ AI Workbench here.

    Intent Classification

  • Why It's Important

    • Improves user experience by reducing friction in interactions.

    • Enhances automation by enabling AI to trigger workflows based on intent.

  • Traditional Challenges

    • Ambiguous User Inputs: Users phrase requests in different ways, making it hard to classify intent correctly.

    • Context Understanding: Simple keyword matching fails when context is required.

    • Handling Edge Cases: Uncommon or out-of-scope queries often misfire or go unclassified.

    • Scalability Issues: Rule-based intent detection struggles with large datasets and complex interactions.

    • Semantic Understanding: Semantic understanding poses a significant challenge in intent classification due to the complexity of human language. It involves interpreting the meaning behind a sentence and identifying the speaker's underlying intention.

    How AI Solves These Challenges

    • Machine Learning Models: Use NLP (Natural Language Processing) models trained on varied user inputs to classify intents accurately.

    • Context-Aware Models: Advanced AI models can understand context and infer meaning beyond direct keyword matching.

    • Continuous Learning: AI models improve over time by learning from new data and user interactions.

    • Multi-Intent Recognition: AI can detect multiple intents in a single input, leading to more dynamic responses.

    New Possibilities Enabled

    • Dynamic Workflows: AI agents can route users dynamically to different application features.

    • Conversational AI Agents: Chatbots and virtual assistants can handle complex, natural conversations.

    • Better Personalization: AI can adjust responses based on detected user intent and past interactions.

    • Automated Process Execution: AI-driven intent classification enables intelligent automation, reducing manual effort.
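    To picture what an intent classifier hands back to a workflow, here is a hypothetical sketch. The intent label, confidence score, and output shape are illustrative assumptions, not the Reasoning Engine's actual schema.

    // Hypothetical classifier output for a user query (illustrative fields only)
    const classification = {
      query: "I'd like to upload a balance sheet of a business for analysis",
      intent: "analyze_balance_sheet",              // assumed intent label
      confidence: 0.93,                             // assumed model confidence
      entities: { documentType: "balance sheet" }   // assumed extracted entity
    };

    // A workflow could branch on the detected intent
    if (classification.intent === "analyze_balance_sheet" && classification.confidence > 0.8) {
      console.log("Route the request to the document-analysis sub-agent");
    }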

    AI Agents

    What are they?

    AI Agents are autonomous entities that leverage artificial intelligence to perceive their environment, make decisions, and take actions to achieve specific goals. They can interact with their environment, learn from experiences, and adapt their behavior to optimize outcomes. Sub-agents are specialized AI agents that work under the direction of a primary agent to handle specific tasks or aspects of a larger goal.

    Why are they important?

    AI agents are crucial because they can automate complex tasks, enhance decision-making, and improve efficiency across various domains. They can handle repetitive processes, analyze vast amounts of data, and provide personalized experiences. By delegating tasks to sub-agents, AI agents can break down complex problems into manageable components and achieve goals more effectively.

    How are agents changing the way we find solutions?

    AI agents are revolutionizing problem-solving by offering intelligent and adaptive solutions. They can explore multiple possibilities, learn from feedback, and refine their strategies to find optimal outcomes. By automating information gathering, analysis, and decision-making, AI agents accelerate the solution-finding process and enable more informed and effective actions.

    Example of AI Agents

    • Customer Service Chatbots: AI-powered chatbots can handle customer inquiries, provide support, and resolve issues autonomously.

    • Personalized Recommendation Systems: AI agents can analyze user preferences and behavior to offer tailored product recommendations.

    • Autonomous Vehicles: AI agents control self-driving cars, making decisions about navigation, obstacle avoidance, and traffic management.

    • Financial Trading Bots: AI agents can execute trades, monitor market conditions, and optimize investment portfolios.

    How Does It Work?

    Generative AI learns by studying patterns in massive datasets and then tries to create something similar. Here’s a simple way to think about it:

    1. Learning from Data: The AI looks at millions of examples (e.g., books, paintings, music) and figures out the common patterns. Example: If you show it thousands of cat pictures, it learns what a "cat" looks like.

    2. Making Predictions: When given a prompt (like a sentence or an image idea), the AI predicts what should come next based on what it has learned. Example: If you ask an LLM (Large Language Model) like ChatGPT to write a poem, it guesses the next best words based on how poems are usually written.

    3. Refining the Output: Advanced AI models improve their outputs over time by constantly fine-tuning their results based on feedback. Example: In the lending and loan origination space, an AI model used for credit risk assessment gets better at predicting loan defaults by continuously learning from past loan performance.

      1. Initially, the AI analyzes historical financial spreadsheets, borrower credit scores, and income statements to assess loan risk.

      2. If the model predicts that a borrower is low-risk but they later default, the system adjusts its criteria based on this new information.

      3. Over time, it becomes more accurate at detecting risky applicants and improving loan approval decisions.

  • Assistance with configuring workflows, intents, and integrations.

  • Debugging and troubleshooting errors.

  • Best practices and development recommendations.

    Our support team is here to ensure a smooth development experience.

    [email protected]
    Build Your First AI Agent

    JavaScript Snippet (Required)

    A JavaScript code snippet that will be executed. It must return a value that will be passed to the next node.

    Input Variables

    Receives data from previous workflow steps via input.<var_name>.

    Agent Variables

    Access agent-level variables via agent.<var_name>.

    Secret Variables

    Securely retrieve secret values via secret.<var_name>.

    Context Variables

    Use context.<var_name> for workflow-wide data persistence.

    Instead of directly processing a raw document, converting it into images first improves clarity for AI models, ensuring higher extraction accuracy when passed to Prompt Nodes for further analysis.

    Watch the “Build Your First Agent” video to see how this node is used in real-world document processing workflows.

    Configurations

    Field
    Description

    Document Id

    The unique identifier of the document that needs to be converted into images. This documentId is generated by the Upload Document Node.

    Execution Flow

    1️⃣ Receives a documentId as input (from an Upload Document Node).

    2️⃣ Converts the document into images (one per page for PDFs/DOCX, one per sheet for Excel).

    3️⃣ Returns image metadata, including URLs, page numbers, and sheet names (if applicable).

    4️⃣ The resulting image URLs can be passed to an LLM for data extraction (via a Prompt Node).

    Output Format

    The node returns a structured JSON output containing image metadata linked to the original document:

    • documentId → The original document’s reference ID.

    • imageUrl → The generated image’s location (can be used for further processing).

    • pageNumber → Page index for multi-page documents (PDF/DOCX).

    • sheetName (For Excel) → Indicates which sheet the image corresponds to.


    Example Use-Cases

    Use-Case 1: Extracting Data from a Loan Agreement PDF

    A loan processing workflow needs to extract borrower details and loan terms from a PDF document. Instead of directly processing the PDF, the document is converted to images for better OCR and AI-driven text extraction.

    Configuration:

    Field
    Value

    Document Id

    fa5d0517-a479-49a5-b06e-9ed599f8e57a

    Execution Process:

    1️⃣ User uploads a PDF (Loan Agreement).

    2️⃣ Document to Image Node converts each page into separate images.

    3️⃣ The image URLs are passed to the Prompt Node, where an LLM extracts borrower details, interest rates, and loan conditions.

    🔹 Why use this approach?

    ✔ Improves OCR accuracy (eliminates PDF formatting inconsistencies).

    ✔ Prepares structured image data for AI-based text extraction.

    ✔ Works with multi-page documents seamlessly.


    Use-Case 2: Processing Financial Spreadsheets for AI Extraction

    A workflow extracts financial summaries from an Excel sheet, ensuring accurate numeric extraction (e.g., revenue, expenses, and net profit values).

    Configuration:

    Field
    Value

    Document Id

    fc9d0517-b479-49a5-b06e-8ed599a8c123

    Execution Process:

    1️⃣ User uploads an Excel file (Balance Sheet).

    2️⃣ Document to Image Node converts each sheet into an image.

    3️⃣ The images are processed through an LLM, extracting key financial data.

    🔹 Why use this approach?

    ✔ Preserves numeric formatting (avoids misinterpretation of decimal points).

    ✔ Prepares structured tables for AI analysis.

    ✔ Enhances accuracy for finance-driven workflows.


    Use-Case 3: Automating Identity Verification from Scanned Documents

    A workflow automates KYC (Know Your Customer) verification by extracting text from scanned documents.

    Configuration:

    Field
    Value

    Document Id

    ff5d0517-d179-42a5-a16e-3ed599f8e77b

    Execution Process:

    1️⃣ User uploads a scanned image of an ID (PDF format).

    2️⃣ Document to Image Node extracts individual pages into images.

    3️⃣ The image URLs are sent to an AI model, which verifies identity details.

    🔹 Why use this approach?

    ✔ Ensures compatibility with OCR-driven KYC tools.

    ✔ Allows for multi-step validation (Face Match, ID Verification, etc.).


    Key Takeaways for Developers

    ✅ Abstracts Document-to-Image Conversion – Developers don’t need to manually process PDFs, Excel sheets, or DOCX files. The node automates image conversion for seamless AI-based processing.

    ✅ Enhances LLM-Based Data Extraction – Converting documents to images improves AI accuracy, ensuring better text recognition and field extraction.

    ✅ Supports Multi-Format Inputs – Works with PDFs, Excel Sheets, and Scanned Documents, making it versatile across business use cases.

    ✅ Integrates with AI & OCR Processing – Images generated from documents can be passed to Prompt Nodes, enabling structured AI-driven data extraction.

    ✅ Used in Real-World AI Workflows – Watch the "Build Your First AI Agent" video to see how this node is applied for document-based automation.

    By leveraging the Document to Image Node, developers can streamline AI-powered document processing, ensuring higher accuracy, efficiency, and seamless AI integration. 🚀

    { "status": "pending review" }
    { "status": "approved" }
    { "status": "rejected" }
    [
      { "transactionId": "T567", "status": "completed", "amount": 1500 },
      { "transactionId": "T568", "status": "pending", "amount": 700 }
    ]
    const userName = input.user_name; // Data from the previous node
    const apiKey = secret.apiKey; // Secure API Key from Secret Variables
    const agentType = agent.type; // Retrieve agent-level variable
    const sessionId = context.sessionId; // Workflow-wide session data
    {
      "user_id": 123,
      "name": "John Doe",
      "email": "[email protected]",
      "address": {
        "street": "123 Main St",
        "city": "New York",
        "zip": "10001"
      }
    }
    // Flatten the user profile from the previous node into name, email, and a single-line address
    const main = () => {
      const { name, email, address } = input;
      const location = `${address.street}, ${address.city}, ${address.zip}`;
      return {
        fullName: name,
        emailAddress: email,
        location,
      };
    };
    main();
    {
      "fullName": "John Doe",
      "emailAddress": "[email protected]",
      "location": "123 Main St, New York, 10001"
    }
    // Flag applications with a credit score below 650 for manual review
    const main = () => {
      const { creditScore } = input;
      return {
        creditScore,
        reviewRequired: creditScore < 650 ? true : false
      };
    };
    main();
    {
      "creditScore": 620,
      "reviewRequired": true
    }
    // Apply a 10% discount when the order total exceeds $500, otherwise 5%
    const main = () => {
      const { orderTotal } = input;
      const discount = orderTotal > 500 ? 0.1 : 0.05;
      return {
        orderTotal,
        discountAmount: orderTotal * discount
      };
    };
    main();
    {
      "orderTotal": 600,
      "discountAmount": 60
    }
    // Validate the input and compute a 10% tax, returning an error message when the amount is missing
    const main = () => {
      try {
        const { amount } = input;
        if (!amount) throw new Error("Amount is required");
        
        return { amount, tax: amount * 0.1 };
      } catch (error) {
        return { error: error.message };
      }
    };
    main();
    {
      "imagesResult": [
        {
          "images": [
            {
              "key": "/executions/192012/image1",
              "documentId": "985b9706-c3e0-48b3-b6f5-2cb873004e41",
              "imageUrl": "https://storage.googleapis.com/example/image1.jpg"
            }
          ],
          "pageNumber": 1
        },
        {
          "images": [
            {
              "key": "/executions/192012/image2",
              "documentId": "901bd6ce-0ee5-48bb-b4fc-5351c7a9d925",
              "imageUrl": "https://storage.googleapis.com/example/image2.jpg"
            }
          ],
          "pageNumber": 2
        }
      ]
    }

  • Define Facts:

    • Facts are input variables (e.g., loan amount, borrower age) that will receive values dynamically at runtime.

  • Define Decisions (Rules):

    • Go to the Decision Tab → Click "Create Decision"

    • Define a Decision (Rule) by selecting a Fact, setting an operator, and assigning a threshold value.

    Example:

    • Rule Name: Loan Amount should be greater than $1000

    • Fact: Loan Amount

    • Operator: Greater than (>)

    • Value: 1000

    • Output: Define the variable that will be passed when this rule is met. If the rule is not satisfied, this output variable won’t be included in the Ruleset Node output.

    • Repeat the same process for borrower age (e.g., Borrower age must be 18 or above).

  • Click "Add" to create the rule and complete the Ruleset setup.

  • Choose the Ruleset that contains rules for Loan Amount and Borrower Age.
  • Configure Mappings (Variable Assignments):

    • Use the special variable $input to access the output of the previous node.

    • Assign the runtime values to the respective Facts in the Ruleset:

    Ruleset Fact
    Mapped Value

    Loan Amount

    $input.loanAmount (1500)

    Borrower Age

    $input.borrowerAge (25)

    This ensures that when the Ruleset executes, it evaluates Loan Amount = 1500 and Borrower Age = 25 against the pre-defined rules.

  • The output variables defined in the Ruleset are returned and available for the next workflow node.
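    For example, with the runtime values mapped above (Loan Amount = 1500, Borrower Age = 25), both Decisions are satisfied, so both output variables appear in the Ruleset Node output. The variable names below are illustrative; they depend on the Output you configure for each rule.

    // Illustrative Ruleset Node output when both rules pass (variable names are assumptions)
    const rulesetOutput = {
      loanAmountValid: true,   // output variable for "Loan Amount should be greater than $1000"
      borrowerAgeValid: true   // output variable for "Borrower age must be 18 or above"
    };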
    Illustration of the UPTIQ AI agent showcasing the flow from Sub-Agents to Intents and Workflows, powered by the Reasoning Engine and connected to Data for seamless AI-driven processes

    How Does It Work?

    This diagram illustrates the end-to-end flow of how the UPTIQ AI Agent processes user queries and executes workflows effectively. Let’s break it down into key steps:

    1. Receiving the User Query: The journey starts with a user submitting a request to the AI Agent. Example: A user might ask, “I’d like to upload a balance sheet of a business for analysis and to have it structured into my spreadsheet template.”

    The AI Agent takes this request and begins analyzing it to understand the user's needs.

    2. Intent Classification: The core of understanding lies in the Reasoning Engine, which powers the AI Agent.

      • In this case, the intent is identified as “Analyze Balance Sheet and Structure Data”.

      • The Reasoning Engine understands that this involves analyzing the uploaded document, extracting key data, and organizing it in a specific template.

    3. Delegation to Sub-Agent: Based on the identified intent, the AI Agent delegates the task to the appropriate Sub-Agent.

      • The relevant Sub-Agent here specializes in document analysis and data structuring tasks.

    4. Executing the Workflow

      The Sub-Agent activates the predefined workflow associated with this task. The workflow involves several steps, such as:

      1. Document Upload and OCR:

        • The user uploads the balance sheet.

    5. Final Decision & Response

      Once the workflow is completed, the AI Agent evaluates the result and determines the next steps:

      • If additional information is needed (e.g., missing data from the balance sheet), the Reasoning Engine proactively asks the user for clarification or additional documents.

      • If the task is completed, the AI Agent delivers the structured spreadsheet back to the user.

    How to create an Agent?

    Entity Recognizers

    Overview

    The Entity Recognizer (ER) feature enables developers to define custom patterns to identify specific entities in natural language user queries. These patterns are used to extract relevant entities (e.g., email addresses, loan application numbers, phone numbers) that are critical for processing workflows efficiently.

    For example: A user submits the query, "Give me the amortization schedule for my loan application: LA:12312331."

    • Developers can define a custom regex to recognize the loan application number LA:12312331.

    • The Entity Recognition Node in the workflow can then use this pattern to extract the entity dynamically from the user query.

    How to Create a New Entity Recognizer

    1. Navigate to the Model Hub → Select the Entity Recognizer tab.

    2. Click "Create Entity Recognizer."

    3. Fill in the following fields:

      • Name: Provide a meaningful name (e.g., Loan Application Recognizer).

    Using an Entity Recognizer in Workflows

    To use a created Entity Recognizer, follow these steps:

    1. Add the Entity Recognition Node to the workflow.

    2. Pass the user query as input to the node.

      • Use the special variable $agent.query, which contains the user’s natural language query.

    Example:

    • Entity Recognizer: Loan Application Number

    • Input: Give me the amortization schedule for my loan application: LA:12312331

    • Output:

    The output from this node will be used as input for the next connected node, where the named entity can be accessed using the 'result' key.
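    To illustrate how such a pattern behaves, here is a plain JavaScript sketch of the match the Entity Recognition Node performs conceptually. The regex LA:\d{8} and the global flag come from the recognizer configuration described in this section; the node itself is configured declaratively, so this is not its internal implementation.

    // Regex from the Loan Application Recognizer example, with the global flag
    const pattern = /LA:\d{8}/g;

    const query = "Give me the amortization schedule for my loan application: LA:12312331";

    // Extract every loan application number present in the query
    const matches = query.match(pattern);
    console.log(matches); // ["LA:12312331"]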

    Key Use Cases for Entity Recognizers

    ✅ Extract Named Entities – Recognize and extract structured data (e.g., email addresses, IDs, dates) from user queries.

    ✅ Enhance Workflow Automation – Use extracted entities to dynamically route workflows or fetch related data.

    ✅ Handle Complex Inputs – Process unstructured natural language queries with precision.

    Key Takeaways for Developers

    ✅ Customizable Patterns – Define regex patterns to recognize specific entities tailored to your use case.

    ✅ Seamless Workflow Integration – Use the Entity Recognition Node to incorporate entity extraction directly into workflows.

    ✅ Efficient Processing – Simplify user query handling by dynamically identifying and isolating critical information.

    ✅ Regex Flexibility with Flags – Use flags to adjust pattern behavior for better matching and adaptability.

    By leveraging Entity Recognizers, developers can create AI workflows that intelligently extract and process key information, improving the efficiency and accuracy of automated responses.

    AI Workflow Automation

    In today's fast-paced digital landscape, where efficiency and accuracy are paramount, AI workflow automation has emerged as a transformative force. It's not merely a technological advancement; it's a strategic imperative that empowers businesses to optimize operations, enhance productivity, and unlock new realms of innovation.

    Why is AI Workflow Automation Important?

    • Efficiency and Productivity: By automating repetitive and mundane tasks, AI liberates human workers to focus on strategic, creative, and value-added activities. This streamlines processes, reduces errors, and accelerates turnaround times, leading to enhanced productivity and operational efficiency.

    • Cost Savings: Automation reduces the need for manual labor, leading to significant cost savings in the long run. Additionally, by minimizing errors and optimizing resource allocation, AI workflow automation helps businesses avoid costly rework and delays.

    • Scalability: AI-powered workflows can be easily scaled to accommodate growing business needs. This flexibility enables organizations to adapt to changing market conditions and seize new opportunities without incurring significant additional costs.

    • Data-Driven Insights: AI workflow automation generates a wealth of data that can be leveraged to gain valuable insights into business operations. These insights can be used to identify bottlenecks, optimize processes, and make informed decisions.

    • Improved Customer Experience: By automating customer-facing tasks such as order processing and support, AI can deliver faster, more personalized, and more consistent customer experiences. This can lead to increased customer satisfaction and loyalty.

    • Innovation and Growth: By freeing up resources and enabling faster, more efficient operations, AI workflow automation fosters a culture of innovation. This empowers businesses to explore new ideas, develop new products and services, and stay ahead of the competition.

    In essence, AI workflow automation is not just about doing things faster; it's about doing things smarter. It's about leveraging the power of artificial intelligence to transform the way businesses operate, compete, and grow in the digital age.

    How AI Workflow Automation Enhances AI Agents

    1. Efficiency and Focus:

      • AI workflow automation handles repetitive tasks, allowing the AI agent to concentrate on higher-level functions like natural language understanding and decision-making.

      • This division of labor improves the overall efficiency and effectiveness of the AI agent.

    2. Scalability and Adaptability:

    Key Takeaway for Developers:

    By incorporating AI workflow automation into the design and development of AI agents, you can create more intelligent, efficient, and adaptable systems that deliver superior results. Remember that the AI agent is the "brain" that makes decisions and takes action, while the AI workflow automation is the "backbone" that supports and enhances its capabilities.

    Input

    Overview

    The Input Node in UPTIQ Workbench is designed to collect user input dynamically within a workflow, ensuring that processes requiring user-provided data can proceed efficiently. This node plays a crucial role in workflows where structured or freeform input is required before the next action is executed.

    Unlike static configurations, the Input Node enables real-time user interaction, allowing workflows to adapt based on user-provided values. It supports multiple input types, including basic text, numbers, and rich text formatting, making it ideal for structured and detailed data collection scenarios.

    RAG

    Retrieval-Augmented Generation (RAG) is a framework that combines traditional information retrieval with generative AI. It enables AI agents to generate contextually accurate and factually grounded responses by retrieving relevant information from a knowledge base or data source and using it to augment the reasoning process.

    In UPTIQ AI Workbench, RAG is implemented using a sequence of components designed to manage and use data effectively in AI workflows. Below is a detailed explanation of each related concept:

    How to create a RAG?

    Model Hub

    Large Language Models (LLMs)

    UPTIQ provides access to foundational LLMs from leading providers such as OpenAI, Meta, Google, Anthropic, and Groq. This allows developers to experiment with different models and evaluate their behavior for specific AI use cases within their agents.

    Exploring Different Model Capabilities

    Each LLM has unique strengths that developers can leverage based on their needs:

    Loader

    Overview

    The Loader Node in UPTIQ Workbench is designed to enhance user experience by displaying a processing message when a workflow involves a time-consuming operation.

    When workflows require retrieving external data, running computations, or executing multi-step processes, the Loader Node ensures that users are aware of ongoing activity rather than experiencing delays with no feedback.

    This helps to reduce user frustration, improve engagement, and create a smoother conversational flow by keeping users informed during system processing.

    Inference

    What is Inference?

    Inference is the process of running a trained AI model on new data to generate predictions or insights. It is the execution phase where an AI system applies learned knowledge to new situations.

    For Example:

    Imagine you're building a task management app that helps users prioritize their to-do list. You’ve integrated an AI feature that analyzes tasks and suggests priorities (e.g., "High," "Medium," or "Low") based on past behavior. When a user adds a new task, such as "Prepare quarterly report," the app runs it through a pre-trained AI model. The model analyzes the task's description and matches it to patterns learned from past tasks (like similar descriptions being labeled as "High Priority"). Based on this, the model suggests: "High Priority".

    This is inference in action—using a trained model to make decisions or predictions for new, unseen data.

    Importance of Inference

    Large Language Models (LLMs)

    Large Language Models (LLMs) are a type of Generative AI model that focuses on understanding and generating human-like text. They are trained on vast amounts of text data and can write, summarize, translate, and even code based on input prompts.

    How It Works?

    • The model analyzes billions of words from books, articles, and the internet to learn language structure.

    • When given a prompt, it predicts the most likely next words based on its training.

    { 
      "loanAmount": 1500,
      "borrowerAge": 25
    }
    
  • Automating workflows streamlines the integration of AI agents into existing systems.

  • This makes it easier to scale AI capabilities and adapt to changing business requirements.

  • Data-Driven Improvement:

    • AI workflow automation generates valuable data that the AI agent can analyze to identify patterns and trends.

    • This data-driven approach enables continuous learning and improvement, leading to better performance and accuracy.

  • OpenAI GPT models – Strong in natural language understanding, summarization, and creative writing.

  • Meta’s LLaMA models – Optimized for efficiency and fine-tuning on specific domains.

  • Google Gemini – Enhanced for multi-modal capabilities, including text and image processing.

  • Anthropic’s Claude models – Designed with a focus on safety, low hallucination, and instruction-following.

  • Groq models – Ultra-fast inference speeds, suitable for real-time AI applications.

  • By trying different LLMs, developers can find the best fit for accuracy, efficiency, and performance in their AI solutions.

    Fine-Tuning LLMs in UPTIQ

    UPTIQ’s Model Hub includes the ability to run Fine-Tuning pipelines for any supported model.

    What is Fine-Tuning? Fine-tuning is the process of training an existing LLM on domain-specific data to improve accuracy and relevance. Instead of training from scratch, fine-tuning allows the model to:

    • Adapt to specialized vocabulary and context (e.g., financial or legal language).

    • Enhance accuracy on specific tasks like document summarization or compliance verification.

    • Reduce hallucinations by reinforcing factual correctness based on curated datasets.

    Importing Models from TogetherAI

    Developers can also import models from TogetherAI, a platform that aggregates multiple open-source LLMs and provides easy integration for inference and fine-tuning. TogetherAI enables:

    • Access to a diverse set of models beyond proprietary options.

    • Cost-efficient alternatives to running large-scale models.

    • Custom fine-tuning workflows for domain-specific enhancements.

    Custom Reasoning Engine (CustomRE)

    UPTIQ’s Custom Reasoning Engine (CustomRE) allows developers to use their own fine-tuned models as the core Reasoning Engine for AI agents. This enhances accuracy, ensures domain-specific knowledge retention, and provides greater control over responses.

    Key Benefits of CustomRE

    1. Increased Accuracy & Reduced Hallucination

      • By using a fine-tuned model, the AI can generate more precise and reliable responses tailored to the use case.

      • Reduces the risk of hallucinations by grounding responses in trusted training data.

    2. Security & Ethical Guardrails

      • Developers can enforce compliance rules by fine-tuning models on policy-compliant datasets.

      • Helps prevent bias, misinformation, or unauthorized data leakage.

      • Enables role-based access and restricted response generation for sensitive topics.

    3. Context Adherence & Consistency

      • CustomRE ensures that AI responses stay within the defined context, preventing deviation from expected behavior.

      • Ideal for applications where strict adherence to guidelines (e.g., financial compliance, legal advisory) is necessary.

    Key takeaway for developers

    Large Language Models (LLMs) in UPTIQ

    ✅ Experiment with multiple LLMs (OpenAI, Meta, Google, Anthropic, Groq) to find the best fit for your use case.

    ✅ Understand different model capabilities and choose models based on accuracy, response time, and task efficiency.

    ✅ Fine-tune models to enhance accuracy and domain expertise, and to reduce hallucinations.

    ✅ Use TogetherAI to access and import open-source models for cost-effective AI solutions.

    Custom Reasoning Engine (CustomRE) in UPTIQ

    ✅ Leverage fine-tuned models as the reasoning engine for better control over responses.

    ✅ Increase accuracy & reliability by grounding AI outputs in domain-specific data.

    ✅ Enhance security & compliance with AI guardrails that prevent biased or unethical responses.

    ✅ Ensure context adherence so AI responses remain relevant and aligned with the intended use case.

    By utilizing LLMs and CustomRE effectively, developers can build more intelligent, reliable, and domain-specific AI agents within UPTIQ.

    Configurations
    Field
    Description

    Text

    The message displayed to the user while the process is ongoing. It should be concise, clear, and informative to ensure the user understands the system is actively working.

    Best Practices for Loader Messages:

    ✅ Keep messages short and direct to avoid unnecessary user confusion.

    ✅ Use clear phrasing to indicate that the process is in progress and will complete soon.

    ✅ Avoid vague terms like “Processing…”; instead, provide context such as “Retrieving loan details, please wait…”

    Example Use-Cases

    1. Indicating Data Retrieval from an External API

    A financial assistant workflow fetches credit score details from an external API. Since this might take a few seconds, the Loader Node is used to notify the user while waiting for the API response.

    • Configuration:

      • Text: "Fetching your credit score, please wait..."

    • Outcome:

      • The message displays to the user while the API retrieves the data.

      • Once the API call is completed, the workflow proceeds to display the credit score.

    2. Handling a Long Computation for Risk Assessment

    A workflow in a loan approval system performs a complex risk analysis, which involves checking multiple financial parameters.

    • Configuration:

      • Text: "Analyzing financial data, this may take a moment..."

    • Outcome:

      • The user remains informed that the system is processing their risk assessment.

      • The workflow seamlessly continues once computation completes.

    Key Takeaways for Developers

    ✅ Improves User Experience – Prevents confusion by clearly informing users that their request is being processed.

    By incorporating the Loader Node, developers can manage user expectations, enhance workflow responsiveness, and create AI-driven interactions that feel fluid and natural. 🚀

  • Translates AI model training into real-world decision-making.
  • Enables real-time processing of user inputs.

  • Powers AI-driven applications by converting raw data into meaningful actions.

  • Bridges the gap between model development and deployment.

  • Traditional Challenges

    • High Latency: Running complex models in real-time can be slow.

    • Resource Constraints: AI models require significant computing power, which is costly.

    • Model Accuracy in Production: A model may perform well in training but struggle in real-world scenarios.

    • Scalability: Handling thousands or millions of inferences per second requires optimized infrastructure.

    How Generative AI Models Solve These Challenges

    • Optimized Model Architectures: Generative AI models, such as transformers, are fine-tuned to balance complexity and performance. Techniques like model distillation, quantization, and pruning make them lighter and faster, reducing latency without sacrificing output quality.

    • Adaptive Inference with Few-Shot Learning: Generative AI models can leverage few-shot or zero-shot capabilities to minimize the need for retraining, allowing them to perform well on unseen tasks with minimal additional data.

    • Edge and Cloud Deployment: Generative AI models are increasingly deployed using hybrid setups where simpler, lightweight versions run on edge devices for real-time responses, while larger, resource-intensive models operate in the cloud for complex tasks.

    • Efficient Hardware Utilization: Generative AI models are optimized to utilize modern hardware accelerators like GPUs and TPUs. Additionally, frameworks like ONNX Runtime and TensorRT streamline inference processes for high efficiency.

    • Dynamic Fine-Tuning and Adaptation: Generative AI models use techniques such as Reinforcement Learning from Human Feedback (RLHF) to dynamically adapt to production scenarios, improving accuracy while staying relevant to real-world conditions.

    • Scalable Infrastructure: Generative AI systems leverage distributed computing and load balancing to handle massive inference demands efficiently. Pre-caching responses for commonly generated outputs further optimizes performance in high-traffic scenarios.

    New Possibilities Enabled

    • Real-Time AI Applications: Instant response times for AI-powered assistants, chatbots, and automation.

    • Personalized Experiences: AI can infer user preferences and behaviors in real-time, improving recommendations and interactions.

    • Scalable AI Services: Cloud-based inference allows businesses to serve millions of AI predictions efficiently.

    • Embedded AI: AI-powered decision-making can be deployed in mobile apps, IoT devices, and autonomous systems.

    Advanced LLMs use techniques like transformers and attention mechanisms to generate context-aware responses.

    Examples:

    • GPT-4 (by OpenAI): Advanced LLM for text generation.

    • Claude (by Anthropic): AI chatbot focused on safety and helpfulness.

    • PaLM (by Google): Google's LLM for conversational AI.

    Where It’s Used?

    • Chatbots and AI Assistants (e.g., customer support).

    • Automating report generation in financial services.

    • Coding assistance (e.g., GitHub Copilot).

    Capabilities of LLMs

    1. Natural Language Understanding (NLU): LLMs can comprehend human language, including context, sentiment, and intent. Example: An LLM-powered chatbot in banking can understand customer queries about loan eligibility.

    2. Text Generation & Summarization: Can generate text, complete sentences, and summarize long documents. Example: A financial analyst can use an LLM to summarize a lengthy stock market report in simple terms.

    3. Conversational AI: LLMs can engage in meaningful conversations and answer queries contextually. Example: AI-powered customer support in a bank can answer questions about credit card billing.

    4. Code Generation & Debugging: Can assist in writing and debugging programming code. Example: A fintech developer can use an LLM to generate Python code for calculating mortgage interest rates.

    5. Multilingual Translation: Can translate text between different languages efficiently. Example: A global investment firm can translate financial reports into multiple languages for stakeholders.

    6. Data Extraction & Analysis: Can process large datasets and extract key insights. Example: A compliance officer in a bank can use an LLM to extract critical information from thousands of legal contracts.

    Limitations of LLMs

    1. Lack of Real-Time Knowledge: LLMs rely on past training data and might not have up-to-date information. Example: An LLM might not provide real-time stock prices or latest regulatory changes unless integrated with live data sources.

    2. Bias in Training Data: If the training data contains biases, the model may produce biased outputs. Example: An LLM might generate biased loan approval recommendations if the training data lacks diversity.

    3. Limited Understanding of Context: While LLMs are good at pattern recognition, they don’t truly "understand" concepts. Example: An AI assistant might misinterpret a complex legal clause in a financial agreement.

    4. High Computational Cost: Running and training LLMs require massive computational power and energy. Example: A small fintech startup might struggle to afford high-performance AI models without cloud-based solutions.

    5. Security & Privacy Concerns: LLMs may generate or expose sensitive data if not properly managed. Example: A financial chatbot might inadvertently share personal banking details if security measures are not in place.


    Tables

    Overview

    Tables in Workbench provide an abstract database layer that allows developers to persist data within agentic workflows. Think of it as a custom data storage system where developers can define and manage structured records without worrying about underlying database complexities.

    This feature is particularly useful when AI agents need to store, retrieve, and manipulate data across workflows—such as tracking user interactions, logging processed data, or maintaining reference records.

    🔹 Storage Limitation: Persistent database space is limited by default but can be expanded separately if developers anticipate storing large volumes of data within AI agents.

    How to Create a Table?

    There are two ways to create a Table in UPTIQ:

    1. From Config & Utils (Recommended for Reusability)

    1. Navigate to Config & Utils → Tables Tab.

    2. Click "Create Table."

    3. Enter a name for the table (this serves as a unique identifier for data storage).

    4. Click Save—the table is now ready for use in workflows.

    2. Directly from a Workflow Node (Quick Method)

    1. Drag a Table Write/Read node into a workflow.

    2. Click on the node to open the side panel.

    3. Click "Add Table"—this automatically assigns the table to the current agent.

    4. Save the table definition.

    How to Use Tables in AI Agent Development

    Developers can leverage persistent tables to:

    ✅ Store Data Across Workflow Executions – Maintain records of AI-generated outputs, processed user inputs, or any other structured data.

    ✅ Retrieve & Reuse Data – Query stored information to enhance AI responses, track user history, or fetch reference data dynamically.

    ✅ Automate Business Logic – Use tables to store intermediate results that can be accessed by multiple workflow nodes, reducing redundant computations.

    ✅ Enable Long-Term Data Persistence – Unlike temporary workflow variables, tables retain data across workflow executions, allowing AI agents to operate with stateful memory.

    Best Practices for Using Tables in Workflows

    ✔ Design Tables Thoughtfully – Only store data that is required for AI workflows. Avoid persisting unnecessary information to conserve storage.

    ✔ Regularly Clean Up Data – Since storage space is limited, implement periodic clean-ups for expired or unnecessary records.

    ✔ Ensure Data Security & Compliance – Be mindful of storing sensitive user data and implement access control mechanisms where required.

    Key Takeaways for Developers

    ✅ Use Tables to Persist AI Data – Maintain structured records that workflows can access and update dynamically.

    ✅ Store & Query Information Efficiently – Design data models that support your AI agent’s functionality.

    ✅ Leverage Tables for Stateful AI Workflows – Enable AI agents to retain context between workflow executions.

    ✅ Manage Storage Effectively – Be mindful of storage limitations and optimize data persistence strategies.

    By integrating Tables in workflows, developers can enhance AI capabilities, improve decision-making, and enable long-term data-driven automation.

    Create a Sub Agent.
  • Pattern: Enter a regex pattern for the entity you wish to recognize.

    • Example: To identify loan application numbers in the format LA:12312331, use the regex LA:\d{8}.

  • Flags: Optionally, specify regex flags to modify the pattern’s behavior.

    • Flags Explanation:

      • g → Global flag to find all matches in the input.

      • i → Case-insensitive matching.

      • s → Enables the dot . to match newline characters.

  • Click "Create" to save the recognizer.

  • The node processes the input using the defined Entity Recognizer and outputs the recognized entity, as shown in the example below.
    Configurations

    Type (Required)

    Defines the format of user input to be collected. Supported types:

    • Rich Editor: Provides a rich text editor for detailed responses, such as structured reports, summaries, or project documentation.

    • String/Number: Collects basic text or numeric input, useful for entering loan amounts, names, or other direct values.

    Rich Editor Input Configuration (Optional, applicable only when Type = Rich Editor)

    • Template: Predefines a structured format to guide user input in the rich text editor.

    • Example: A template prompting users to enter an executive summary and key objectives.

    Output Format

    • The collected user input is structured in the following JSON format:

    Example Use-cases

    1. Capturing Detailed User Input in a Loan Application

    Scenario: A financial institution requires applicants to provide a loan justification with structured details about their project. The Input Node, configured as a Rich Editor, guides the applicant in submitting the required information.

    Configuration for Rich Editor Input

    Field
    Value

    Type

    Rich Editor

    Template

    <p><strong>Executive Summary:</strong> Please provide a brief overview of the project.</p><p><strong>Key Objectives:</strong></p><ul><li>Objective 1</li><li>Objective 2</li><li>Objective 3</li></ul>

    Output

    🔹 Why this is useful: This ensures that the applicant provides structured and detailed information, improving processing efficiency and data consistency.


    2. Collecting Numeric Input for Loan Amount

    Scenario: A loan application workflow requires the user to enter the requested loan amount before proceeding to eligibility checks. The Input Node is configured to accept a number as input.

    Configuration for Numeric Input

    Field
    Value

    Type

    Number

    Output

    🔹 Why this is useful: The workflow can now process the entered amount, apply eligibility criteria, and proceed with loan approval steps dynamically.

    Key Takeaways for Developers

    ✅ Enables User-Driven Workflows – Collect user input in real-time, ensuring workflows proceed only when required information is provided.

    ✅ Supports Multiple Input Types – Choose between basic text, numbers, or rich text to accommodate different data collection needs.

    ✅ Structured Input with Templates – Use predefined templates in Rich Editor mode to ensure users provide information in a consistent format.

    ✅ Seamless Integration – Output data is formatted in JSON, allowing smooth processing in subsequent workflow nodes, such as AI models, validation logic, or database storage.

    By incorporating the Input Node, developers can enhance interactivity in UPTIQ workflows, enabling intelligent and user-driven automation. 🚀

    {
      "result": ["[email protected]"]
    }
    {
      "userInput": "<user-entered value will be available here. Accessible via 'userInput' key>"
    }
    {
      "userInput": "<p><strong>Executive Summary:</strong> This project aims to improve operational efficiency by automating key processes.</p><p><strong>Key Objectives:</strong></p><ul><li>Streamline workflow management</li><li>Reduce manual errors</li><li>Enhance reporting capabilities</li></ul>"
    }
    { "userInput": 10000 }
    The Sub-Agent uses OCR (Optical Character Recognition) to extract structured data from the document.
  • Data Analysis:

    • The extracted data is analyzed for key metrics like assets, liabilities, and equity.

  • Template Structuring:

    • The analyzed data is formatted into the user’s specified spreadsheet template.

    • Any required calculations (e.g., net income or ratios) are applied based on the template’s logic.

  • Example Output: A polished spreadsheet populated with the analyzed balance sheet data, fully formatted according to the user’s template.

    What is a Data Store?
    • Definition: A data store in UPTIQ is an entity that groups multiple data sources together. It acts as an organizational layer, allowing developers to manage and access related data sources as a single logical unit.

    • Key Features:

      • Supports the integration of multiple data sources.

      • Provides a unified interface for interacting with grouped data.

    • Example: A single data store could include a combination of:

      • Product catalogs (from a MySQL database),

      • Policy documents (PDFs), and

      • Logs (CSV files).

    How to create a data store?

    For details on how to create a data store, watch the setup guide.

    What is a Data Source?

    • Definition: A data source is the origin of the data, which can be structured or unstructured.

    • Examples:

      • Unstructured: Files such as PDFs, CSVs, images, or documents.

      • Structured: Databases such as PostgreSQL, MongoDB, or MySQL.

    • Purpose: Data sources feed raw information into the system, which can later be processed and used for workflows.

    What is a Vector Store?

    • Definition: A vector store converts the content of a data store into vector embeddings and stores these embeddings using a vector database.

    • How It Works:

      • Takes raw data from the data store (structured or unstructured).

      • Processes the data to generate vector embeddings using AI models.

      • Stores these embeddings in a vector database for efficient retrieval.

    • Purpose: Vector embeddings represent the semantic meaning of the data, enabling similarity-based searches. This is crucial for identifying and retrieving contextually relevant information.

    • Underlying Technologies: Vector databases from renowned providers such as MongoDB, Pinecone, and Postgres.

    What is a RAG Container?

    • Definition: A RAG container is the entity that links to the vector store and exposes its capabilities for use in workflows via a special RAG node.

    • How It Works:

      1. Association with Vector Store: The RAG container uses the embeddings stored in the vector store to perform retrieval-augmented reasoning.

      2. Exposed to Workflows: Developers can add the RAG node in a low-code/no-code manner to integrate retrieval functionality directly into workflows.

      3. Dynamic Interaction: The reasoning engine retrieves relevant information via the RAG container and uses it to augment intent classification and response generation.

    • Developer's Role:

      • Create or configure the RAG container associated with a vector store.

      • Use the RAG node in workflows to retrieve and incorporate relevant knowledge dynamically.

    • Example in Workflow: When a user asks, "Show me the loan terms for Plan B," the RAG container retrieves the most relevant embeddings from the vector store (e.g., a section of a document or database record) and feeds it into the reasoning engine (if used in knowledge) or LLMs (if used in RAG node) for a precise response.

    How These Components Work Together

    1. Data Ingestion:

      • Developers add multiple data sources (PDFs, databases, etc.) to a data store.

    2. Vectorization:

      • The vector store processes the data in the data store, generating vector embeddings and storing them in a vector database.

    3. RAG Integration:

      • The RAG container is linked to the vector store.

      • Developers integrate the RAG node into workflows, enabling retrieval and augmentation in their AI agent's logic.

    4. Dynamic Query Handling:

      • A user query is processed by the reasoning engine.

      • The RAG container retrieves relevant embeddings from the vector store.

      • Retrieved data is used to generate an accurate, context-rich response.
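    The sketch below ties these four steps together. The helper objects (embedQuery, vectorStore, llm) are placeholders invented for illustration; in the Workbench this wiring is handled declaratively by the RAG node rather than written by hand.

    // Placeholder stand-ins for the embedding model, vector database, and LLM
    const embedQuery = async (text) => [0.1, 0.2, 0.3];                                     // hypothetical embedding
    const vectorStore = { search: async (vector, { topK }) => ["Plan B loan terms: ..."] }; // hypothetical retrieval
    const llm = { generate: async (prompt) => `Answer grounded in: ${prompt.slice(0, 40)}...` }; // hypothetical LLM

    const answerWithRag = async (userQuery) => {
      const queryEmbedding = await embedQuery(userQuery);                               // 1. vectorize the query
      const relevantChunks = await vectorStore.search(queryEmbedding, { topK: 3 });     // 2. retrieve relevant context
      const prompt = `Answer using only this context:\n${relevantChunks.join("\n")}\n\nQuestion: ${userQuery}`; // 3. augment
      return llm.generate(prompt);                                                      // 4. generate a grounded response
    };

    answerWithRag("Show me the loan terms for Plan B").then(console.log);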

    Key takeaway for developers

    ✅ Understanding these concepts is essential for leveraging the full potential of UPTIQ’s low-code/no-code workflows.

    ✅ Think of these components as tools in a developer's toolkit:

    • Data Sources: Raw materials.

    • Data Store: Organizational structure.

    • Vector Store: Advanced search engine.

    • RAG Container: Intelligent retrieval and integration.

    ✅ Mastering RAG allows developers to build sophisticated AI agents capable of delivering accurate and contextually relevant responses to users.

    Learn More

    Code Snippets

    Overview

    Like many low-code/no-code platforms, UPTIQ allows developers to inject custom code into workflows when required. The Workbench provides the ability to execute JavaScript code using Code Snippets, which can be embedded in workflows through the "Javascript" Node.

    What is a code snippet?

    A Code Snippet is a simple JavaScript function named main, which:

    • Takes input from the previous node in the workflow.

    • Stores that input in a special variable input, making it available throughout the function.

    • Outputs the processed result, which becomes the input for the next node in the workflow.

    Understanding 'Input' in Code Snippets

    Since every node’s output becomes the input for the next node, the data type of the input variable depends on the preceding node’s output:

    • If the previous node outputs a JSON object, input will be of type JSON.

    • If the previous node outputs a string or number, input will hold that respective type.

    This flexibility allows developers to manipulate, transform, or filter data dynamically within workflows.
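
    Here is a minimal sketch of what such a snippet might look like, assuming the previous node outputs a JSON object, that `input` and `main` behave as described above, and that the return value of `main` becomes the node's output. The field names are purely illustrative.

    function main() {
      // `input` is the special variable holding the previous node's output (a JSON object here).
      const { firstName, lastName, loanAmount } = input;
      // The processed result is handed to the next node in the workflow.
      return {
        fullName: `${firstName} ${lastName}`.trim(),
        loanAmount: Number(loanAmount) || 0
      };
    }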

    When to Use Code Snippets?

    Developers can use JavaScript Nodes for: ✅ Data Formatting & Transformation – Convert data formats, adjust values, or prepare data for API calls. ✅ Basic Filtering – Remove unnecessary fields or extract key data before passing it to the next node. ✅ Applying Business Logic – Execute programmatic operations that aren’t natively supported by other workflow nodes. ✅ Custom Computations – Perform calculations or data aggregations before the AI agent processes the result.

    Best Practices & Pitfalls of Using JavaScript Nodes

    🚨 Avoid Overuse of JavaScript Nodes While JavaScript nodes add flexibility, excessive use can:

    • Reduce Workflow Readability – Business logic may become hidden inside code, making workflows harder to understand.

    • Complicate Debugging & Maintenance – Unlike visual nodes, JavaScript logic isn’t easily visible, which can make troubleshooting time-consuming.

    • Break the Low-Code Advantage – Workflows should remain as low-code as possible; use JavaScript only when absolutely necessary.

    💡 Recommendation: Use JavaScript Nodes sparingly. Always check if the same transformation or filtering can be achieved with other workflow nodes before resorting to custom code.

    How to create a code snippet?

    There are two ways to create a Code Snippet:

    1. From Config & Utils

    1. Navigate to Config & Utils → Code Snippet Tab.

    2. Click "Create Code Snippet."

    3. Enter a meaningful name for easy identification in workflows.

    4. Select the agent where the code snippet will be used.

    2. Directly from a JavaScript Node (Quick Method)

    1. Drag a JavaScript Node into the workflow.

    2. Click on the node to open the side panel.

    3. Click "Add Script"—this creates a new snippet and automatically assigns it to the agent.

    4. Save the snippet and later update it with the required business logic.

    Key Takeaways for Developers

    ✅ Leverage JavaScript Nodes for targeted logic execution, such as data transformations and filtering. ✅ Ensure workflow readability by limiting JavaScript usage to essential cases. ✅ Use reusable Code Snippets to maintain modularity and avoid redundant script writing. ✅ Prioritize low-code workflow nodes whenever possible for better maintainability and collaboration.

    By using Code Snippets effectively, developers can enhance workflow capabilities without compromising the benefits of low-code/no-code development in UPTIQ.

    API Node

    Overview

    The API Call Node in UPTIQ Workbench enables workflows to interact with external APIs by performing HTTP requests. This node allows developers to fetch, send, update, or delete data from external services in a structured and automated manner.

    By leveraging the API Call Node, developers can integrate their workflows with third-party APIs, internal services, or cloud endpoints, ensuring seamless data exchange between systems.

    Configurations

    Endpoint (Required)

    • The URL of the external API to which the request is sent.

    • Supports static URLs or dynamic variables fetched from workflow data.

    Method (Required)

    • Defines the type of HTTP request to be made.

    • Supported methods: ✅ GET – Retrieve data from the API. ✅ POST – Send new data to the API. ✅ PUT – Update an existing resource. ✅ PATCH – Modify part of an existing resource. ✅ DELETE – Remove a resource.

    Headers (Optional)

    • A set of key-value pairs representing HTTP headers to include in the request.

    • Example: { "Authorization": "Bearer <token>", "Content-Type": "application/json" }.

    Parameters (Optional)

    • Query parameters to be appended to the URL.

    • Example: { "userId": "12345" } results in https://api.example.com/resource?userId=12345.

    Request Body (For POST, PUT, PATCH requests only)

    • The payload sent with the request.

    • Can be in raw JSON format or key-value format.

    • Example: { "name": "John Doe", "email": "[email protected]" }.

    Output Format

    • If the request is successful, the node outputs:

    • If the request fails, the node outputs:

    Example Use-Cases

    Example 1: Fetching Data from an External API (GET Request)

    A workflow needs to retrieve user details from a third-party service.

    Configuration:

    • Endpoint: https://jsonplaceholder.typicode.com/todos/1

    • Method: GET
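
    As a rough mental model only (the node is configured visually; this is not its implementation), the configuration above corresponds to an HTTP request like the following:

    // Equivalent plain JavaScript, for intuition only.
    const response = await fetch('https://jsonplaceholder.typicode.com/todos/1', { method: 'GET' });
    const data = await response.json();
    // On success the node wraps the result as { "data": ... }; on failure, as { "error": ... }.
    console.log({ data });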

    Response Output:


    Example 2: Sending Data to an API (POST Request)

    A workflow needs to create a new user record in an external system.

    Configuration:

    • Endpoint: https://api.example.com/users

    • Method: POST

    • Headers: { "Content-Type": "application/json" }

    Response Output (Success):

    Response Output (Error):

    Key Takeaways for Developers

    ✅ Seamless API Integration – Easily connect UPTIQ workflows with external APIs for data exchange. ✅ Supports All Major HTTP Methods – Perform GET, POST, PUT, PATCH, and DELETE requests. ✅ Flexible Configuration – Customize headers, query parameters, and request body as per API requirements. ✅ Handles API Responses Efficiently – Captures both successful data responses and error messages for better debugging. ✅ Use with Dynamic Variables – Fetch endpoint URLs and request data dynamically from previous nodes for dynamic API calls.

    By leveraging the API Call Node, developers can extend UPTIQ Workbench workflows beyond internal processes, integrating them with external platforms, databases, and third-party applications for automated and intelligent data handling. 🚀

    Ruleset

    Overview

    The Ruleset Node in UPTIQ Workbench serves as an abstraction that allows developers to integrate Rulesets into workflows seamlessly. It helps evaluate specific records against predefined business rules.

    Rulesets are defined separately and can be used to enforce business logic, such as eligibility checks, risk assessments, or compliance validation. The Ruleset Node enables developers to apply these rules dynamically to real-time data, ensuring that decisions are made based on runtime inputs.

    For detailed information on how Rulesets are created and managed, refer to the Ruleset Section in the developer guide.

    Configurations

    Field
    Description

    Mappings Explained

    Once a Ruleset is selected, developers must map input data to the Facts defined in the Ruleset. This mapping consists of:

    1. Target (Fact in Ruleset)

      • This field is auto-populated based on the Facts defined in the selected Ruleset.

      • Example: If the Ruleset has Facts LoanAmount and Age, these will appear as Targets.

    Example Mapping Setup:

    When executed, the workflow dynamically assigns runtime values to the Ruleset’s Facts, allowing the rules to be evaluated against real data.
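
    As a hedged illustration (using the borrower example from the use case below), the mapping resolves runtime values into the Ruleset's Facts roughly like this; the object shapes are illustrative, not the platform's internal format.

    // Output of the previous node (e.g., a Table Read Node), as in the use case below.
    const input = { requestedLoanAmount: 150, borrowerAge: 19 };

    // Each Source expression is resolved and assigned to its Target Fact before evaluation.
    const facts = {
      LoanAmount: input.requestedLoanAmount, // Source: $input.requestedLoanAmount, Target: LoanAmount
      Age: input.borrowerAge                 // Source: $input.borrowerAge, Target: Age
    };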

    Example Use-Case

    1. Applying Knockout Rules for Borrowers

    Scenario: A loan origination workflow applies knockout rules to identify applications that should be declined before entering the origination process. A Ruleset named "Borrower Knockout Rules" has been defined with the following Facts and Rules:

    • Fact: LoanAmount → Rule: Must be greater than $100

    • Fact: Age → Rule: Must be greater than 18

    At runtime, loan application details are retrieved from a table using a Table Read Node and passed to the Ruleset Node.

    Loan Application Data (Fetched at Runtime)

    Ruleset Node Configuration

    Field
    Value

    Workflow Execution & Ruleset Evaluation

    During execution, the Ruleset Node evaluates the conditions based on runtime values:

    • LoanAmount = 150 → ✅ Rule Passed (Approved)

    • Age = 19 → ✅ Rule Passed (Approved)

    Ruleset Node Output Format

    🔹 What This Means: The borrower meets the knockout rule conditions, so the application should proceed to the next stage in the workflow.

    Key Takeaways for Developers

    ✅ Automates Business Rule Enforcement – The Ruleset Node applies pre-defined logic dynamically to evaluate records against business rules in real-time.

    ✅ Seamless Workflow Integration – Developers can map runtime values from previous nodes (such as Table Read or API responses) directly into Ruleset Facts, ensuring rules are evaluated against live data.

    ✅ Enables Conditional Workflow Execution – By checking which rules pass or fail, developers can design workflows that automatically approve, reject, or trigger additional actions based on rule evaluation.

    ✅ Improves Decision-Making Efficiency – Instead of hardcoding business logic into workflows, rules are managed separately in a Ruleset, making updates easier and reducing workflow complexity.

    ✅ Flexible Data Mapping – Mappings allow data to be sourced from agent-level variables, previous node outputs, or static values, giving developers full control over runtime rule execution.

    By leveraging the Ruleset Node, developers can implement complex business logic in a structured, reusable, and scalable manner, ensuring that workflows execute with rule-based intelligence. 🚀

    Prompt Engineering

    What Is It?

    Prompt engineering is the skill of crafting the right prompts to get the best responses from AI language models like chatbots or AI assistants. Since AI doesn't "think" like humans, the way you phrase your prompts significantly impacts the quality of the output.

    Here's an analogy to help you understand prompt engineering: Imagine you're asking a librarian for help finding a book. If you simply say, "I want a book," the librarian might not know where to start. But if you say, "I'm looking for a historical novel set in the 19th century about a female protagonist," the librarian can provide a more specific and helpful response.

    The same principle applies to prompt engineering. By providing clear, concise, and informative prompts, you can guide the AI model to generate more accurate, relevant, and creative responses.

    How Does It Work?

    Crafting effective prompts is crucial for getting the most out of AI language models. Let's delve deeper into the comparison between ineffective and effective prompts, and explore additional examples across various domains to illustrate the key principles of prompt engineering.

    Ineffective vs. Effective Prompts: A Deeper Dive

    The initial example showcases the stark contrast between a vague and a well-structured prompt. "Tell me about loans" is too broad and open-ended, yielding potentially overwhelming and unfocused results. In contrast, "What are the steps involved in obtaining a home loan? What documentation is required, and what criteria are used for approval?" demonstrates specificity, guiding the AI towards a targeted and informative response.

    Examples

    Historical Research:

    • Ineffective: "Tell me about World War II."

    • Effective: "Analyze the causes of World War II, focusing on the role of political ideologies and economic tensions."

    Creative Writing:

    • Ineffective: "Write a story."

    • Effective: "Write a science fiction short story about a time traveler who accidentally alters the course of history."

    Scientific Inquiry:

    • Ineffective: "Explain climate change."

    • Effective: "Discuss the impact of human activities on climate change, specifically the role of greenhouse gas emissions."

    Technical Support:

    • Ineffective: "My computer isn't working."

    • Effective: "I'm encountering a blue screen error on my Windows 10 laptop. What troubleshooting steps can I take?"

    The Power of Prompt Engineering

    By mastering the art of prompt engineering, you can unlock the full potential of AI language models. Well-crafted prompts enable you to extract precise information, generate creative content, and explore complex topics with remarkable ease and efficiency. Remember, the quality of the AI's output is directly influenced by the quality of your input.

    Here are some tips for effective prompt engineering:

    • Clarity and Specificity: The foundation of a good prompt is clarity. Clearly articulate your request, leaving no room for ambiguity. Be specific about the format, style, and tone you expect in the response.

      Example:

      • Instead of: "Write about AI."

      • Use: "Write a 300-word article about the benefits of AI in healthcare, using a professional tone and including three examples of applications."

    Advanced Prompting Techniques

    • Prompt Chaining: Break down complex tasks into a sequence of simpler prompts, each building on the output of the previous one. Learn More

    • Prompt Interpolation: Combine multiple prompts or prompt elements to generate more nuanced and sophisticated responses. Learn More

    • Prompt Optimization: Use machine learning techniques to automatically optimize prompts for specific tasks or desired outcomes. Learn More

    Ethical Considerations

    • Bias Mitigation: Be mindful of potential biases in the AI model and take steps to mitigate them through careful prompt design and output evaluation.

    • Harmful Content Prevention: Implement safeguards to prevent the AI from generating harmful or offensive content.

    • Transparency and Accountability: Clearly communicate the limitations of the AI model and take responsibility for the outputs it generates.

    AI Agents vs LLM-based APPs

    From a developer's perspective, AI Agents and LLM-based apps like ChatGPT differ significantly in terms of architecture, capabilities, and use cases.

    LLM-based apps are primarily focused on generating text based on a given prompt. They excel at tasks such as language translation, summarization, and content creation. However, their functionality is limited by their reliance on pre-trained models and their inability to interact with external systems or perform actions beyond generating text.

    AI Agents, on the other hand, are designed to be more versatile and capable of performing a wider range of tasks. They can interact with their environment, make decisions, and take actions based on their goals. This is achieved through the integration of various components, such as perception modules, decision-making algorithms, and action execution mechanisms.

    Why AI Agents Are the Next Step Beyond LLM-Based Apps

    LLM-based apps have provided significant advancements in how users interact with software, but they have notable limitations. AI Agents address these limitations by offering context awareness, real-world action capabilities, and decision-making autonomy. Below is a detailed comparison:

    1. Overcoming Limited Context with AI Agents

      LLM-Based Apps: Struggle with Context Retention

      • LLM-based apps typically rely on a stateless approach, meaning they process each user input independently.

      • While modern models support longer context windows, they still struggle with remembering past interactions over long sessions.

      How AI Agents Solve This

    Key Differences Between AI Agents and LLM-Based Apps

    Platform Shift & Evolution

    • AI agents represent a major shift from traditional SaaS and LLM-based apps.

    • Historically, software architecture evolved with platform changes (e.g., mainframes → cloud).

    • Now, we’re moving from software-driven apps to AI-driven agents.

    AI Agents vs. LLM-Based Apps

    • LLM-based apps: These are applications that use large language models (LLMs) to enhance user interactions but still function as traditional apps.

    • AI Agents: These are autonomous, goal-oriented systems that perform tasks on behalf of users with minimal human intervention.

    Functionality Differences

    • LLM-based apps require user input and respond accordingly.

    • AI agents proactively take action based on intent, context, and automation.

    The Future of Agents

    • Agents will integrate deeply into workflows, replacing static SaaS interfaces.

    • Instead of navigating multiple apps, users will interact with agents that dynamically execute tasks across various systems.

    Implication for Developers

    • Developers will need to build AI-native architectures instead of just embedding LLMs into traditional apps.

    • AI agents will require new frameworks for decision-making, autonomy, and integration.

    Key Takeaway for Developers

    AI agents are not just chatbots or enhanced LLM-based apps—they are autonomous systems designed to replace traditional apps by executing actions dynamically. They consist of a reasoning engine for intent classification, inference, and task execution orchestration.

    Storage Read

    Overview

    The Storage Read Node is the counterpart to the Storage Write Node, enabling developers to retrieve document metadata from storage using a storageId.

    Important Note: This node does not return the actual document file. Instead, it retrieves the documentId, which serves as the primary key within the system for document identification.

    If developers need to fetch the actual document contents, they can use the documentId returned by this node in conjunction with the Fetch Document Node.

    Common Workflow Pattern for Storage Read Usage

    1️⃣ Retrieve the document metadata using the Storage Read Node (returns documentId). 2️⃣ Pass the documentId to the Fetch Document Node to get the actual file contents. 3️⃣ Process the document further using OCR, AI extraction, or other workflow steps.

    🔹 Example Use-Case: In a loan processing system, an AI agent may need to retrieve previously stored financial statements. The Storage Read Node fetches the documentId, which is then passed to Fetch Document Node to extract and analyze the contents.

    Configurations

    Field
    Description

    Execution Flow:

    1️⃣ The Storage Read Node receives a storageId as input. 2️⃣ It retrieves the corresponding documentId, which serves as the system’s primary key for document identification. 3️⃣ The documentId can then be used in a Fetch Document Node to retrieve the actual file.

    Output Format:

    • documentId → A unique identifier that represents the stored document.

    Example Use-Cases

    Use-Case 1: Retrieving KYC Documents for Loan Applications

    A loan processing workflow needs to retrieve previously stored KYC documents (e.g., ID Proof, Address Proof) for verification before approval.

    Configuration:

    Field
    Value

    Execution Process:

    1️⃣ The Storage Read Node retrieves the documentId associated with the stored KYC document. 2️⃣ The returned documentId is passed to the Fetch Document Node to retrieve the actual file. 3️⃣ The document is verified by AI models or manual review before final loan approval.


    Use-Case 2: Fetching Financial Statements for Business Loan Assessment

    A financial AI agent needs to retrieve and analyze past financial statements submitted during a business loan application.

    Configuration:

    Field
    Value

    Execution Process:

    1️⃣ The Storage Read Node fetches the documentId for the financial statement. 2️⃣ The documentId is passed to the Fetch Document Node to retrieve the statement file. 3️⃣ The retrieved statement is processed by an AI model to extract financial insights.


    Use-Case 3: Accessing Archived Invoices for Auditing

    An enterprise finance system requires invoices stored in the system to be retrieved for tax filing and audits.

    Configuration:

    Field
    Value

    Execution Process:

    1️⃣ The Storage Read Node retrieves the documentId for the stored invoice. 2️⃣ The documentId is passed to the Fetch Document Node, which fetches the actual invoice. 3️⃣ The invoice is reviewed by finance teams for tax compliance and auditing.

    Intent

    What is an Intent in Sub Agent?

    An Intent represents a specific goal or action that an AI sub-agent in UPTIQ is designed to handle based on user input. It helps the reasoning engine understand and classify user queries to trigger the appropriate response or workflow.

    Key Components of an Intent:

    1. Intent Name

      • A unique identifier for the intent.

      • Should be easy to distinguish from other intents within the same sub-agent.

    2. Intent Description

      • A clear and comprehensive explanation of the intent’s purpose.

      • Used by the UPTIQ reasoning engine to match user queries accurately.

      • Should be detailed yet simple for optimal LLM processing.

    3. Intent Examples

      • Up to five sample queries that help the reasoning engine learn how users may phrase their requests.

      • These examples improve intent recognition accuracy.
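
    Putting the three components together, an intent could look roughly like the sketch below. The object shape is hypothetical (intents are configured in the Workbench UI), and the description and examples are illustrative.

    // Hypothetical representation of an intent; configure the equivalent fields in the Workbench UI.
    const intent = {
      name: 'GetTotalLiabilityValue',
      description: "Returns the total liability value calculated from the user's uploaded financial documents.",
      examples: [
        'What is the total liability value?',
        'Show me my total liabilities',
        'How much do I owe in total?'
      ] // up to five sample queries
    };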

    How Intent Execution Works in UPTIQ AI Sub-Agents

    When a user query is triggered, UPTIQ's reasoning engine follows a structured process to identify and execute the appropriate intent. Here's how it works:

    1. User Query Processing

    • The AI receives the user's input (e.g., "What is the total liability value?").

    • The reasoning engine analyzes the query and compares it with defined intent examples to determine the best match.

    2. Intent Matching

    • If the query closely aligns with an existing intent's name, description, or examples, the system selects that intent.

    • If no exact match is found, the engine attempts to generalize the query to the closest related intent.

    3. Workflow Execution

    • Once an intent is matched, the system automatically triggers the associated workflow.

    • The workflow defines the next steps, such as:

      • Fetching relevant information from connected documents or databases.

      • Performing calculations or data extractions.

    4. Response Generation

    • The AI processes the workflow output and delivers a structured response to the user.

    • The response is formulated based on the data retrieved and may include text, summaries, or extracted document details.

    5. Continuous Learning & Improvement

    • If users frequently ask queries that don’t match existing intents, developers can refine intents by:

      • Adding new examples to improve recognition.

      • Modifying intent descriptions for better clarity.

      • Creating new intents for uncovered scenarios.

    Key takeaways for developers

    ✅ Purpose-Driven Design – Intents represent specific user goals or actions, enabling AI agents to deliver accurate and contextually relevant responses.

    ✅ Clear Naming for Easy Identification – Use unique, descriptive names to distinguish intents within sub-agents, improving clarity and organization in workflows.

    ✅ Comprehensive Descriptions – Provide clear, unambiguous descriptions to help the UPTIQ reasoning engine match user queries effectively.

    ✅ Use Examples for Precision – Add up to five examples of user queries to guide the reasoning engine and improve accuracy in intent recognition.

    ✅ Workflow Integration – Each intent is automatically linked to a workflow stub, allowing developers to define precise actions when an intent is matched.

    ✅ Iterative Refinement – Regularly update intent examples and descriptions to handle evolving user queries and improve performance over time.

    ✅ Modular Scalability – Intents can be expanded and refined without disrupting the functionality of other intents or workflows, ensuring scalability and flexibility.

    By designing and maintaining well-structured intents, developers can create intelligent, responsive, and user-centric AI agents within UPTIQ.

    How to create an intent in Sub Agent?

    Fetch Document

    Overview

    The Fetch Document Node is responsible for retrieving the actual content of a document using its documentId. While the Storage Read Node retrieves the documentId from storage, this node takes that documentId and returns a pre-signed URL, allowing developers to access the actual file securely.

    🔹 Key Purpose: ✅ Fetch the actual document file that was previously stored. ✅ Generate a pre-signed URL for secure document access. ✅ Enable document viewing in a document viewer or AI processing pipeline.

    Common Workflow Pattern for Fetch Document Usage

    1️⃣ Retrieve the documentId using the Storage Read Node (or from previous workflow execution, variables) 2️⃣ Pass the documentId to the Fetch Document Node to obtain the pre-signed URL. 3️⃣ Use the pre-signed URL to display the document in a document viewer or process it further in an AI workflow.

    🔹 Example Use-Case: A loan processing system retrieves a previously submitted financial statement. The Storage Read Node fetches the documentId, and the Fetch Document Node generates a pre-signed URL, which is used to display the document in a viewer for verification.

    Configurations

    Field
    Description

    Execution Flow:

    1️⃣ The Fetch Document Node receives a documentId as input. 2️⃣ It generates a pre-signed URL, allowing secure access to the actual document. 3️⃣ The pre-signed URL can be used in a document viewer or passed to another workflow node for further processing.

    Output Format:

    • documentUrl → A secure, temporary link to access the document.

    🔹 Important Note: ✔ The pre-signed URL is time-limited for security reasons. ✔ Developers should use the documentUrl immediately or refresh it when needed.
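
    Once the documentUrl is available, any standard HTTP client can download the file while the link is still valid. A hedged sketch follows, with the placeholder URL taken from the output example:

    // The URL below is a placeholder for the node's documentUrl output.
    const documentUrl = 'https://storage.example.com/pre-signed-url';
    const response = await fetch(documentUrl);        // pre-signed link; refresh it if it has expired
    const fileBytes = await response.arrayBuffer();   // raw document contents for viewing or AI processing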

    Example Use-Cases

    Use-Case 1: Displaying a KYC Document in a Document Viewer

    A loan processing workflow requires a loan officer to review a user’s submitted KYC documents (e.g., Passport, Address Proof) before approval.

    Configuration:

    Field
    Value

    Execution Process:

    1️⃣ The Storage Read Node retrieves the documentId of the stored KYC document. 2️⃣ The Fetch Document Node generates a pre-signed URL for secure document access. 3️⃣ The document URL is displayed in a document viewer for manual review.


    Use-Case 2: Extracting Information from a Stored Invoice

    An AI-powered invoice processing system needs to extract key financial data from a previously uploaded invoice.

    Configuration:

    Field
    Value

    Execution Process:

    1️⃣ The Storage Read Node fetches the documentId of the invoice. 2️⃣ The Fetch Document Node generates a pre-signed URL for secure access. 3️⃣ The document URL is passed to an OCR-based AI model, which extracts text and numeric values.


    Use-Case 3: Automating the Retrieval of Financial Statements

    A business loan underwriting system requires AI models to analyze past financial statements stored in the system.

    Configuration:

    Field
    Value

    Execution Process:

    1️⃣ The Storage Read Node retrieves the documentId of the financial statement. 2️⃣ The Fetch Document Node generates a pre-signed URL for AI processing. 3️⃣ The document is analyzed by an AI model to assess financial health and eligibility.

    Key Takeaways for Developers

    ✅ Retrieves Actual Document Content – Unlike the Storage Read Node, which returns only the documentId, the Fetch Document Node generates a pre-signed URL for secure document access.

    ✅ Enables Secure & Temporary Access – The generated pre-signed URL is time-limited, ensuring that documents remain protected and cannot be accessed indefinitely.

    ✅ Seamless Integration with AI & Document Processing – Works alongside AI models, OCR tools, and document viewers, allowing for automated processing, manual review, or AI-driven extraction.

    By leveraging the Fetch Document Node, developers can efficiently retrieve, process, and display documents, making AI-driven workflows more powerful, secure, and intelligent. 🚀

    Workflow

    Overview

    The Workflow Node allows developers to attach an existing workflow as a subworkflow, enabling modularity, reusability, and simplified workflow management.

    By using subworkflows, developers can: ✅ Avoid Duplication – Use the same workflow across multiple parent workflows. ✅ Simplify Complex Workflows – Break down large workflows into manageable steps. ✅ Improve Maintainability – Make debugging and updates easier by isolating logic into reusable subflows.

    Configurations

    Field
    Description

    How the Workflow Node Works

    1️⃣ The parent workflow calls the selected subworkflow. 2️⃣ The input data is passed to the subworkflow for execution. 3️⃣ The subworkflow runs independently, processing its steps. 4️⃣ Once completed, the subworkflow returns the output to the parent workflow.
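
    Conceptually, this is similar to calling a function from the parent workflow. The sketch below is an analogy only; `processInvoice` stands in for the "Process Invoice" subworkflow used in Use-Case 1.

    // Analogy only: a subworkflow behaves like an awaited function call.
    async function processInvoice({ invoiceId, amount }) {
      // ...validate the invoice, check compliance, log the transaction (as in Use-Case 1)...
      return { invoiceId, status: 'processed' };
    }

    async function parentWorkflow() {
      // The Workflow Node passes its configured Input to the subworkflow and waits for the result.
      const result = await processInvoice({ invoiceId: '12345', amount: '1000' });
      return result; // the subworkflow's output becomes the input of the next node in the parent
    }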

    Example Use-Cases

    Use-Case 1: Processing Invoices in a Finance Workflow

    A finance automation workflow requires invoice processing as a separate, reusable step.

    Configuration:

    Field
    Value

    Execution Process:

    1. The Process Invoice subworkflow is triggered with the input data.

    2. It validates the invoice, checks compliance, and logs the transaction.

    3. Once processed, it returns the final status to the parent workflow.

    🔹 Why Use a Subworkflow? ✔ Keeps the parent workflow clean and focused. ✔ Allows multiple workflows to reuse the invoice processing logic. ✔ Easy to update invoice handling without modifying multiple workflows.


    Use-Case 2: Loan Origination System

    A loan application workflow needs to verify applicant details through a separate KYC (Know Your Customer) process.

    Configuration:

    Field
    Value

    Execution Process:

    1. The Verify KYC subworkflow retrieves customer details.

    2. It performs ID verification and credit checks.

    3. The subworkflow returns an approval status to the parent workflow.

    🔹 Why Use a Subworkflow? ✔ Standardizes KYC processing across multiple workflows. ✔ Allows quick updates to KYC rules without affecting multiple workflows. ✔ Improves workflow organization by isolating verification logic.


    Use-Case 3: Data Enrichment Pipeline

    A data processing workflow needs to clean and enrich incoming customer records before storage.

    Configuration:

    Field
    Value

    Execution Process:

    1. The Data Cleansing subworkflow standardizes formats, removes duplicates, and enriches records.

    2. It validates email addresses, phone numbers, and other key fields.

    3. The subworkflow returns the cleaned record to the parent workflow for further processing.

    🔹 Why Use a Subworkflow? ✔ Reduces repetitive data validation logic across workflows. ✔ Allows independent testing and debugging of the data cleansing process. ✔ Keeps the parent workflow lightweight and focused on orchestration.


    Key Takeaways for Developers

    ✅ Promotes Reusability – Subworkflows allow reusing standard logic instead of duplicating it in multiple workflows.

    ✅ Improves Readability & Debugging – Complex logic is encapsulated in a subworkflow, making parent workflows easier to maintain.

    ✅ Facilitates Scalable Architecture – Updates to a subworkflow automatically reflect across all workflows using it.

    ✅ Supports Modular Execution – Subworkflows run independently, ensuring better organization and execution control.

    By leveraging the Workflow Node, developers can design scalable, modular, and maintainable workflows, ensuring that business processes remain adaptable and easy to manage. 🚀

    Vector Search

    Overview

    The Similarity Search Node enables AI workflows to retrieve contextually similar items based on vector representations rather than exact keyword matching. This is particularly useful for applications where meaning and semantic similarity matter more than specific words.

    Webhook

    Overview

    The Agent Webhook Feature allows third-party systems to send real-time notifications to an AI agent when specific events occur. This feature enables external applications to trigger workflows in response to external updates, such as when data processing is complete or new information is available.

    Display

    Overview

    The Display Node is designed to help developers show output messages or content to users during a conversation. It enables workflows to interact dynamically with users by presenting relevant information or actionable options.

    The Display Node supports multiple content types, including:

    • Plain text messages.

    Click Save—the snippet is now available for workflows.

    • Contextualization: Providing relevant context can significantly enhance the quality of the output. This could include background information, specific examples, or desired outcomes.

      Example:

    • Instead of: "Summarize this text."

    • Use: "Summarize the following text as if you were explaining it to a high school student unfamiliar with the topic. Focus on key takeaways and avoid technical jargon."

  • Iterative Refinement: Don't expect perfection on the first try. Experiment with different phrasings, structures, and levels of detail. Analyze the results and refine your prompts accordingly.

    Example:

    1. Initial Prompt: "Generate ideas for a marketing campaign."

    2. Refined Prompt: "Generate three creative marketing campaign ideas for a new eco-friendly product targeting young adults, focusing on social media platforms."

    3. Further Refinement: "Generate three marketing campaign ideas for an eco-friendly water bottle targeting college students, incorporating Instagram and TikTok trends."

  • Role-Playing and Persona Adoption: Instruct the AI to adopt a specific role or persona. This can be particularly useful for creative writing, content generation, or simulating conversations.

    Example:

    • Instead of: "Explain cloud computing."

    • Use: "Explain cloud computing as if you're a tech journalist writing for a beginner audience."

    • Or: "Explain cloud computing as if you're a professor giving a lecture to computer science students."

  • Temperature Control: Many AI models have a "temperature" setting that controls the randomness of the output. Higher temperatures produce more creative and unpredictable results, while lower temperatures generate more focused and deterministic responses.

    Example:

    • Low Temperature (Focused Output): "Generate a step-by-step guide for setting up a home Wi-Fi network."

    • High Temperature (Creative Output): "Imagine a futuristic home Wi-Fi network. Describe how it works and its unique features."

  • System-Level Instructions: Some AI systems allow you to provide system-level instructions that guide the overall behavior of the model. This can be used to set the tone, establish constraints, or prioritize specific aspects of the task.

    Example:

    • "You are a helpful assistant specializing in financial planning. Provide concise and practical advice for budgeting for a family of four."

    • "Your task is to act as an expert proofreader. Correct grammatical errors while maintaining the original style and tone of the text."

  • Few-Shot Learning: Provide a few examples of the desired output format or style. This can help the AI model "learn" what you're looking for and generate more relevant responses.

    Example:

    • Prompt: "Generate a customer support response email. Here are two examples:

      1. 'Dear [Name], thank you for reaching out. We’ve received your request and will get back to you within 24 hours.'

      2. 'Hi [Name], thanks for contacting us. We’re looking into your issue and will provide an update shortly.' Now, write a response to a customer inquiring about a refund policy."

  • Chain-of-Thought Prompting: Encourage the AI to break down complex tasks into a series of smaller steps and articulate its thought process. This can lead to more accurate and insightful results.

    Example:

    • Instead of: "Solve this math problem: If a car travels 60 miles in 1.5 hours, what is its speed?"

    • Use: "Step-by-step, calculate the speed of a car that travels 60 miles in 1.5 hours. Start by identifying the formula for speed, then apply the numbers."

    • Output: "Step 1: The formula for speed is distance ÷ time. Step 2: The car travels 60 miles in 1.5 hours. Step 3: Speed = 60 ÷ 1.5 = 40 mph."

  • Workflow

    Select an existing workflow that will be attached and executed as a subworkflow.

    Input

    Define the input data to be passed to the subworkflow. This should match the expected input format of the selected subworkflow.

    Workflow

    Process Invoice

    Input

    { "invoiceId": "12345", "amount": "1000" }

    Workflow

    Verify KYC

    Input

    { "customerId": "A9876" }

    Workflow

    Data Cleansing

    Input

    { "recordId": "C10293" }

    Request Body:

    Source (Value Assigned to the Fact at Runtime)
    • This field defines where the value for the Fact should come from.

    • The value can be sourced from: ✅ A workflow variable (e.g., data stored at the agent level). ✅ The output of a previous node (e.g., fetched from a database). ✅ A static value (if needed).

    Ruleset

    Select a predefined Ruleset or create a new one using the "Add Ruleset" button.

    Mappings

    Define how runtime data maps to the Ruleset’s Facts for rule evaluation.

    Source (Runtime Value)

    Target (Ruleset Fact)

    $input.requestedLoanAmount

    LoanAmount

    $input.borrowerAge

    Age

    Ruleset

    Borrower Knockout Rules

    Mappings

    Source = $input.requestedLoanAmount

    Target = LoanAmount

    Source = $input.borrowerAge

    Target = Age

    Storage

    Select the pre-configured storage from where the document needs to be retrieved.

    Storage ID

    The unique identifier of the document in storage, generated by the Storage Write Node. This ID is required to fetch the associated documentId.

    Storage

    KYC Documents

    Storage ID

    fa5d0517-a479-49a5-b06e-9ed599f8e57a

    Storage

    Financial Statements

    Storage ID

    b85c7029-df3a-49ab-a45e-3bdfb79d6b7a

    Storage

    Invoices

    Storage ID

    c94a8223-ea5b-4cc5-b36f-7dcf54bfa2e4

    Document ID

    The unique identifier of the document, retrieved using the Storage Read Node. This ID is required to fetch the actual document.

    Document ID

    fa5d0517-a479-49a5-b06e-9ed599f8e57a

    Document ID

    b85c7029-df3a-49ab-a45e-3bdfb79d6b7a

    Document ID

    c94a8223-ea5b-4cc5-b36f-7dcf54bfa2e4

    Key Use-Cases

    ✅ Product Recommendations – Suggests similar products based on user searches or past interactions. ✅ Document Retrieval – Finds relevant research papers, legal documents, or articles based on semantic similarity. ✅ Semantic Search – Enhances search accuracy by retrieving results based on meaning, not just keywords. ✅ Chatbots & Virtual Assistants – Helps chatbots retrieve relevant responses from a knowledge base.

    How It Works

    1️⃣ The node queries a vector store to find items most similar to the input query. 2️⃣ The results are ranked by similarity and returned to the workflow. 3️⃣ Optional metadata filters can further refine the retrieved results.

    🔹 Example Use-Case: A shopping assistant AI recommends high-performance laptops similar to a user’s search query based on vector embeddings rather than just text matches.

    Configurations

    Field
    Description

    Vector Store

    Define the vector store that contains indexed data for similarity search. This store must be properly configured to ensure accurate retrieval results.

    Number of Candidates

    Set the maximum number of similar items to retrieve. The higher the number, the more options will be returned, but with potential trade-offs in relevance.

    Query

    The input query (e.g., product name, description, or text snippet) that will be compared against stored vector embeddings to find the most relevant matches.

    Filters (Optional)

    Apply metadata-based filters to refine search results by restricting retrieval to specific categories, attributes, or tags.


    Metadata Filtering & Its Importance

    Metadata filtering enhances retrieval accuracy and efficiency by allowing developers to limit search results based on specific attributes stored in the vector database.

    How Metadata Filtering Helps in Faster Querying:

    ✅ Narrows Down Search Scope – Instead of searching across all stored vectors, it retrieves only relevant subsets. ✅ Improves Precision – Ensures only relevant matches are returned by applying contextual constraints. ✅ Optimizes Query Performance – Reduces retrieval latency by limiting search operations to predefined categories.

    🔹 Example: If searching for high-performance laptops, metadata filters can restrict results to the “electronics” category, avoiding unrelated results from other domains.
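
    Conceptually, the filter narrows the candidate set before similarity ranking. The sketch below is illustrative only; real vector stores apply filters natively and far more efficiently, and the items and embeddings are made up.

    // Stand-in for the indexed contents of the vector store (made-up embeddings and metadata).
    const allVectors = [
      { id: 'macbook-pro', embedding: [0.21, 0.68, 0.12], metadata: { category: 'electronics' } },
      { id: 'office-chair', embedding: [0.02, 0.10, 0.95], metadata: { category: 'furniture' } }
    ];

    // Simple dot-product similarity, for illustration only.
    const score = (a, b) => a.reduce((sum, v, i) => sum + v * b[i], 0);

    const queryEmbedding = [0.2, 0.7, 0.1]; // embedding of "high-performance laptops" (made up)
    const results = allVectors
      .filter(v => v.metadata.category === 'electronics')                  // metadata filter narrows the scope first
      .map(v => ({ id: v.id, score: score(queryEmbedding, v.embedding) })) // then rank by similarity
      .sort((a, b) => b.score - a.score)
      .slice(0, 2);                                                        // keep the top "Number of Candidates"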


    Execution Flow:

    1️⃣ The Similarity Search Node queries the vector store for semantically similar items. 2️⃣ If metadata filters are applied, only relevant matches are considered. 3️⃣ The node returns the retrieved documents, ranked by similarity.

    Output Format:

    Example Output for a Product Recommendation Query

    🔹 Why use this approach? ✔ Retrieves semantically similar results rather than just keyword matches. ✔ Enhances user experience by delivering more relevant recommendations.

    Example Use-Cases

    Use-Case 1: AI-Powered Product Recommendation System

    A shopping assistant AI suggests products similar to what a user is searching for, based on vector embeddings rather than exact keyword matches.

    Configuration:

    Field
    Value

    Vector Store

    product_vectors

    Number of Candidates

    2

    Query

    "high-performance laptop"

    Filters (Optional)

    { "category": "electronics" }

    Execution Process:

    1️⃣ The Similarity Search Node searches the vector store for products semantically similar to "high-performance laptop". 2️⃣ The system filters results to ensure only electronics are returned. 3️⃣ The most relevant product descriptions are retrieved and recommended to the user.

    Generated AI Response:


    Use-Case 2: AI-Powered Document Retrieval

    A research assistant AI helps users retrieve relevant research papers based on the meaning of their query, not just exact word matches.

    Configuration:

    Field
    Value

    Vector Store

    research_papers

    Number of Candidates

    3

    Query

    "latest advancements in quantum computing"

    Filters (Optional)

    { "year": { "$gte": 2022 } }

    Execution Process:

    1️⃣ The Similarity Search Node retrieves the most relevant research papers on quantum computing. 2️⃣ A metadata filter ensures that only papers published after 2022 are considered. 3️⃣ The retrieved papers are summarized and presented to the user.

    Generated AI Response:


    Use-Case 3: AI-Powered Semantic Search for Customer Support

    A customer support AI retrieves knowledge base entries similar to a user's question, improving chatbot response accuracy.

    Configuration:

    Field
    Value

    Vector Store

    support_knowledge_base

    Number of Candidates

    2

    Query

    "How do I reset my password?"

    Filters (Optional)

    { "category": "account_management" }

    Execution Process:

    1️⃣ The Similarity Search Node retrieves knowledge base articles related to password resets. 2️⃣ A metadata filter ensures that only "account management" articles are considered. 3️⃣ The chatbot retrieves the most relevant responses and displays them to the user.

    Generated AI Response:

    Key Takeaways for Developers

    ✅ Retrieves Contextually Similar Results Using Vectors – The Similarity Search Node matches queries based on meaning rather than exact keywords, making it ideal for recommendations, document retrieval, and semantic search.

    ✅ Supports a Wide Range of AI-Powered Applications – Can be used for product recommendations, knowledge base retrieval, legal research, customer support automation, and more.

    ✅ Uses Metadata Filtering for Faster and More Accurate Results – Developers can apply filters to narrow search results, improving precision and query performance.

    ✅ Enhances User Experience with More Relevant Suggestions – Whether in e-commerce, customer support, or research, the node provides results that closely match the user’s intent, not just keyword matches.

    ✅ Works with Any Configured Vector Store – The node seamlessly integrates with pre-configured vector stores, ensuring scalability and efficiency.

    By leveraging the Similarity Search Node, developers can build intelligent, high-accuracy retrieval systems that improve AI recommendations, enhance search results, and provide more personalized user experiences. 🚀

    Key Use-Cases

    ✅ Real-Time Notifications – Notify the agent when external systems complete processing. ✅ Workflow Automation – Trigger specific agent workflows via a webhook. ✅ Secure Communication – Uses HMAC-SHA256 signatures for payload authentication.

    🔹 Example: A document processing service uses a webhook to notify an AI agent once document extraction is complete, triggering a post-processing workflow in the AI agent.

    Webhook Setup & Configuration

    To enable webhook functionality, follow these steps:

    📌 Setting Up the Webhook in AI Workbench


    1️⃣ Navigate to AI Workbench

    • Open UPTIQ Workbench and select the AI agent that should handle webhook requests.

    • Go to the Triggers tab.

    2️⃣ Create a Webhook (If Not Already Configured)

    • Click on "Create Webhook" and select the workflow that should be executed when the webhook is triggered.

    • If no workflow exists for this webhook, create a new workflow.

    • Click Generate Private Key – this key will be used for signing webhook requests.

    3️⃣ Download the Private Key

    • If the webhook is already configured, click on "More Actions" → "Download Private Key".

    • The private key will be found in the downloaded JSON file.

    4️⃣ Copy the Webhook Endpoint

    • The webhook endpoint will be displayed in the Triggers tab.

    • This URL must be used when sending webhook requests from third-party applications.

    5️⃣ Trigger the Webhook (Third-Party Integration)

    • Use the webhook endpoint and private key to make a POST request with a signed payload.

    • The request must include an HMAC-SHA256 signature in the x-signature header.


    📌 Sending a Webhook Request

    🔹 Method: POST 🔹 URL: Webhook endpoint copied from the Triggers tab 🔹 Headers:

    • Content-Type: application/json

    • x-signature: <HMAC-SHA256 signature>


    📌 Generating the Signature (HMAC-SHA256)

    To ensure secure communication, each webhook request must include a signed payload using HMAC-SHA256.

    JavaScript Example to Generate a Signature

    🔹 How It Works: ✔ Converts the request body into a JSON string. ✔ Uses the private key to generate a SHA-256 HMAC signature. ✔ The generated signature must be included in the x-signature header when sending the request.

    Application of the Webhook

    The Agent Webhook Feature is useful in scenarios where an AI agent needs to respond to real-time external updates. Below are some key use cases:


    1️⃣ Document Processing Completion Notification

    ✔ A third-party OCR/extraction service processes a document and notifies the AI agent when extraction is complete. ✔ The webhook triggers an AI workflow that summarizes the extracted text and classifies important data.

    🔹 Example Workflow: 1️⃣ The OCR service completes document processing. 2️⃣ It sends a POST request to the AI agent’s webhook endpoint, including a document ID. 3️⃣ The AI agent retrieves the document, processes it, and stores relevant information in the database.


    2️⃣ CRM System Sync for Client Updates

    ✔ A CRM system (e.g., Salesforce) sends a webhook notification when client details are updated. ✔ The AI agent retrieves the latest client data and updates its internal records.

    🔹 Example Workflow: 1️⃣ A sales representative updates client details in the CRM. 2️⃣ The CRM system triggers a webhook to notify the AI agent. 3️⃣ The AI agent retrieves the updated data and refreshes the client summary widget.


    3️⃣ Fraud Detection Alert for Financial Transactions

    ✔ A fraud detection system sends a webhook when a suspicious transaction is flagged. ✔ The AI agent analyzes the alert and generates a risk assessment report.

    🔹 Example Workflow: 1️⃣ A transaction monitoring system detects fraud-like activity. 2️⃣ It triggers a webhook to notify the AI agent with transaction details. 3️⃣ The AI agent processes the data, applies risk rules, and sends an alert to the compliance team.


    4️⃣ AI-Powered Chatbot Receiving External Events

    ✔ A chatbot AI receives webhook notifications from an external support ticketing system. ✔ The AI agent updates the conversation context when ticket statuses change.

    🔹 Example Workflow: 1️⃣ A customer submits a support request through an external helpdesk system. 2️⃣ The helpdesk triggers a webhook when the ticket status is updated. 3️⃣ The AI agent notifies the user with real-time updates on ticket progress.


    5️⃣ Payment Processing Update

    ✔ A payment gateway sends a webhook notification when a payment is processed. ✔ The AI agent validates the payment and updates account balance records.

    🔹 Example Workflow: 1️⃣ A customer completes a payment via a payment processor. 2️⃣ The payment system triggers a webhook with payment confirmation details. 3️⃣ The AI agent updates the customer’s account, reflecting the new balance or transaction record.


    Key Benefits of Webhooks in AI Agents

    ✅ Enables real-time communication between AI agents and external systems. ✅ Automates workflows based on real-world triggers (e.g., document processing, payments, client updates). ✅ Ensures secure and authenticated webhook calls using HMAC-SHA256.

    Limitations of Webhooks

    While the Agent Webhook Feature provides real-time event-driven automation, it has certain limitations that developers should be aware of when implementing it.


    No Built-in Retry Mechanism for Failed Webhooks

    ✔ If a webhook request fails due to a temporary issue (e.g., network failure, service downtime), the system does not retry the request automatically. ✔ The sender (third-party system) is responsible for implementing a retry mechanism if needed.

    🔹 Developer Note: If webhook reliability is a concern, design third-party integrations to handle retries in case of transient failures.


    Webhook Performance Depends on Workflow Execution Time

    ✔ Webhook-triggered workflows must complete processing efficiently to avoid slow response times. ✔ If a workflow takes too long to execute, it might cause delays in system updates.

    🔹 Developer Note: Optimize workflow logic to ensure that webhook-triggered tasks execute quickly and do not block other processes.


    Final Thoughts

    The Agent Webhook Feature is a powerful tool for event-driven automation, allowing AI agents to interact with external systems in real time. However, proper setup, security validation, and workflow optimization are essential to ensure reliable and efficient execution.

    {
      "name": "Jane Doe",
      "email": "[email protected]",
      "role": "admin"
    }
    { "data": <API response> }
    { "error": <Error message> }
    {
      "data": {
        "userId": 1,
        "id": 1,
        "title": "delectus aut autem",
        "completed": false
      }
    }
    {
      "data": {
        "id": "12345",
        "name": "Jane Doe",
        "email": "[email protected]",
        "role": "admin"
      }
    }
    {
      "error": "Unauthorized request. Invalid API key."
    }
    {
      "requestedLoanAmount": 150,
      "borrowerAge": 19
    }
    [
      {
        "event": "Check for Loan Amount",
        "params": {
          "loanAmount": "approved"
        }
      },
      {
        "event": "Check for Age",
        "params": {
          "age": "approved"
        }
      }
    ]
    {
      "documentId": "a0abf1d4-a4ca-459e-aada-b10947481b9c"
    }
    {
      "documentUrl": "https://storage.example.com/pre-signed-url"
    }
    {
      "retrievedDocs": [
        {
          "pageContent": "The Apple MacBook Pro, powered by the new Apple Silicon M3 Pro/Max processors, offers up to 64GB of unified memory. With exceptional build quality, long battery life, and seamless integration within the macOS ecosystem, it’s a standout high-performance choice for creative professionals, developers, and video editors."
        },
        {
          "pageContent": "The Dell XPS 15 comes with powerful processor options, including the Intel Core i9 (14th Gen) and AMD Ryzen 9, alongside up to 64GB of DDR5 RAM. This laptop features an OLED display, a lightweight design, and powerful GPU options, making it an ideal high-performance machine for content creators, software engineers, and those involved in multimedia editing."
        }
      ]
    }
    {
      "retrievedDocs": [
        {
          "pageContent": "The Apple MacBook Pro, powered by the new Apple Silicon M3 Pro/Max processors, offers up to 64GB of unified memory. With exceptional build quality, long battery life, and seamless integration within the macOS ecosystem, it’s a standout high-performance choice for creative professionals, developers, and video editors."
        },
        {
          "pageContent": "The Dell XPS 15 comes with powerful processor options, including the Intel Core i9 (14th Gen) and AMD Ryzen 9, alongside up to 64GB of DDR5 RAM. This laptop features an OLED display, a lightweight design, and powerful GPU options, making it an ideal high-performance machine for content creators, software engineers, and those involved in multimedia editing."
        }
      ]
    }
    {
      "retrievedDocs": [
        {
          "pageContent": "A 2023 study on quantum computing breakthroughs in superconducting qubits and error correction techniques."
        },
        {
          "pageContent": "A 2022 paper discussing the impact of quantum entanglement in secure communications and cryptographic algorithms."
        },
        {
          "pageContent": "A research paper exploring new quantum algorithms optimized for solving large-scale mathematical problems."
        }
      ]
    }
    {
      "retrievedDocs": [
        {
          "pageContent": "To reset your password, go to the login page, click 'Forgot Password,' and follow the instructions to receive a reset link."
        },
        {
          "pageContent": "If you're unable to reset your password using the standard method, contact support for further assistance."
        }
      ]
    }
    import crypto from 'crypto'; // Using Node.js crypto library
    
    function createSignature(requestBody, privateKey) {
        const jsonPayload = JSON.stringify(requestBody, null, 0).replace(/\n/g, '');
        // Generate the signature using the SHA-256 algorithm and the private key
        const signature = crypto.createHmac('sha256', privateKey).update(jsonPayload).digest('hex');
        return signature;
    }
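
    A hedged usage sketch continuing from createSignature above; the endpoint, key, and payload fields are placeholders (copy the real endpoint from the Triggers tab and the private key from the downloaded JSON file).

    // Placeholders only; replace with your own webhook endpoint, key, and event payload.
    const webhookUrl = 'https://<webhook-endpoint-from-triggers-tab>';
    const privateKey = process.env.WEBHOOK_PRIVATE_KEY;
    const body = { event: 'document.extraction.completed', documentId: '<documentId>' }; // assumed payload shape

    // Sign exactly the same JSON string that is sent in the request body.
    const signature = createSignature(body, privateKey);

    await fetch(webhookUrl, {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        'x-signature': signature // HMAC-SHA256 signature expected by the webhook
      },
      body: JSON.stringify(body)
    });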

    Postgres Database cluster.

    AI Agents use memory and state management to persistently track user interactions and task progress.

  • They can store user preferences, conversation history, and intermediate results to maintain context over long interactions.

  • Example: A loan origination AI agent (in financial services) remembers past document uploads, form fields, and verification statuses to guide users seamlessly through the application process.

  • AI Agents Can Interact with External Systems

    LLM-Based Apps: Self-Contained and Isolated

    • Traditional LLM-based applications lack direct integration with external systems.

    • They can generate text responses but cannot fetch real-time data or interact with APIs without additional engineering work.

    How AI Agents Solve This

    • AI Agents are designed to connect and interact with external databases, APIs, and software systems.

    • They act as middleware between users and backend systems, automating complex workflows.

    • Example: A loan origination AI agent retrieves live credit scores, bank statements, and loan application statuses via APIs, offering users real-time loan eligibility updates.

  • AI Agents Can Take Action, Not Just Generate Text

    LLM-Based Apps: Passive and Limited to Suggestions

    • LLM-based apps can only suggest what users should do next.

    • They cannot autonomously execute actions in real-world applications.

    How AI Agents Solve This

    • AI Agents have action execution capabilities, meaning they can send emails, book meetings, process transactions, or trigger workflows.

    • They integrate with external services to perform real-world tasks.

    • Example: A loan origination AI agent fills out application forms, schedules document verification meetings, and submits applications on behalf of the user, rather than just guiding them manually.

  • Handling Multi-Step Tasks with Intelligent Workflows

    LLM-Based Apps: Struggle with Multi-Step Processes

    • LLM-based apps work best with single-step, short-turn interactions.

    • Complex, multi-step workflows (e.g., submitting a loan application, verifying income, finalizing approval) require manual intervention.

    How AI Agents Solve This

    • AI Agents break down complex tasks into sub-tasks, ensuring step-by-step execution.

    • They incorporate decision-making logic to adjust dynamically based on user inputs and external conditions.

    • Example: A loan processing AI agent handles a multi-step verification by:

      1. Asking the user for required documents.

      2. Extracting data via OCR and validating financial statements.

      3. Checking loan eligibility via an integrated credit check API.

      4. Formatting and returning the response to the user.

    The Display Node can present a variety of content types in the conversation:

  • Suggestions for quick replies.

  • Clickable links.

  • Charts (using Chart.js V4).

  • Report summaries.

  • Summary grids.

  • Tables with structured data.

  • This versatility ensures that the Display Node is adaptable for a variety of user-facing scenarios, from simple notifications to complex data visualizations.

    Configurations

    1. Type: Defines the type of content to be displayed. Options include:

      • Text: Displays a plain text message.

      • Suggestions: Displays a list of quick reply options.

      • Link: Includes a clickable link with custom text and URL.

      • Chart: Displays data visualizations using Chart.js configurations.

      • Report Summary: Summarizes content with rich text and charts.

      • Summary Grid: Displays a grid-like layout for summary items.

      • Table: Presents structured data in tabular format.

    2. Text: The plain text message to be shown to the user.

    3. Suggestions: A list of actionable suggestions or replies for the user to select from.

    4. Link:

      • Link Text: The displayed text for the clickable link.

      • Link URL: The URL that the link navigates to.

    5. Chart:

      • Chart Label: Title for the chart.

      • Chart Config: Chart.js V4-compatible configuration object (see the example sketch after this list).

      • Height/Width: Optional dimensions for the chart.

    6. Report Summary:

      • Summary Label: Title for the summary.

      • Summary Content: JSON array with summary items (charts, rich text, etc.).

    7. Summary Grid:

      • Summary Label: Title for the grid.

      • Summary Content: JSON array with summary items in grid format.

    8. Table:

      • Table Label: Title for the table.

      • Columns: JSON array defining table columns.

      • Data: JSON array containing rows of table data.
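
    For illustration, a minimal Chart.js V4-compatible object that could be supplied in the Chart Config field (the labels and values below are made up for this sketch):

    {
      "type": "bar",
      "data": {
        "labels": ["Q1", "Q2", "Q3", "Q4"],
        "datasets": [{ "label": "Revenue", "data": [120, 150, 180, 210] }]
      },
      "options": { "responsive": true }
    }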

    Example Use-Cases

    Example 1: Use of Display Node Text Type & Link Type. In this example, the Display Node is configured to present a clickable link to the user in a workflow. This setup demonstrates how the Link Type can be effectively used to deliver actionable content or provide access to external resources in a conversation. Workflow Steps:

    1. Fetch Document Node:

      • The workflow begins with the Fetch Document Node, which retrieves a financial contract file and provides its URL as part of the output.

    2. Show Clickable Link Node:

      • The Display Node is configured with the Link Type to present the user with a clickable link.

      • Link Text: The text displayed to the user is labeled as "Contract Summary".

      • Link URL: The URL is dynamically retrieved from the previous node’s output using the variable $input.url.

    3. Show Message Node:

      • A Text Type Display Node follows, informing the user: "Use the link above to download the summary."

      • This ensures the user understands the purpose of the link provided.
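
    For reference, a minimal sketch of the Show Clickable Link Node configuration from step 2 (the property names are illustrative; in the Workbench UI they correspond to the Link Text and Link URL fields):

    {
      "type": "Link",
      "linkText": "Contract Summary",
      "linkUrl": "$input.url"
    }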

    User Interaction in Chat Interface:

    • The user initiates the workflow by uploading a financial contract.

    • The system processes the file and displays the clickable link with the text "Contract Summary".

    • Upon clicking, the user is redirected to download the summary.

    • A subsequent message guides the user about the action, improving clarity.

    In this example, the Link Type and Text Type are used in tandem to deliver a seamless user experience:

    • The Link Type provides the user with direct access to a dynamically generated resource, such as the contract summary. This clickable link is clear and actionable, enabling the user to immediately download the required file.

    • The Text Type complements the link by offering a simple, guiding message that explains the purpose of the link and directs the user on the next steps.

    Together, these types ensure that the user interaction is both intuitive and informative, with actionable links supported by clear contextual guidance. This combination highlights how different Display Node types can work together to enhance user engagement within a workflow.

    Key Takeaways for Developers

    ✅ Versatility in Output: The Display Node can handle a wide range of content types, making it ideal for diverse use cases.

    ✅ User Engagement: Suggestions, links, and charts improve user interactivity and experience during conversations.

    ✅ Structured Information Delivery: Tables and summary grids present complex data in an organized manner, enhancing readability.

    ✅ Customization and Flexibility: Chart configurations, summary layouts, and export options allow developers to tailor the display to specific needs.

    ✅ Simple Configuration: With just a few fields to configure, the Display Node integrates seamlessly into workflows.

    Table Read

    Overview

    The Table Read Node is the counterpart to the Table Write Node, allowing developers to retrieve data from a persistent table within the UPTIQ AI Workbench. This node is essential for workflows that require fetching stored records, applying filtering conditions, and selecting specific fields before processing further.

    Unlike a traditional database query tool, this node enables structured data retrieval within AI-driven workflows, ensuring that the output can be dynamically used in subsequent nodes, such as AI processing, user responses, or workflow decision-making.

    Configurations

    Field
    Description

    Filters (Optional) – Using NoSQL Query Syntax

    The Table Read Node uses NoSQL-style filtering, similar to MongoDB query syntax, to retrieve specific records based on conditions. Filters must be structured as JSON objects with field names as keys and operators as values.

    Common NoSQL Query Operators Supported

    Operator
    Description
    Example

    Example Filter Configurations

    1. Fetching Approved Loan Applications

    2. Retrieving Transactions Greater Than $500

    3. Fetching Users Between Ages 25 and 40

    How Each Configuration Works

    1. Table

      • Specifies the Table to query.

      • Example: Users, Transactions, LoanApplications.

    Output Format

    • The result is always returned as an array of objects under the "data" key:

    Example Use-Cases

    1. Fetching Users Above a Certain Age

    A workflow needs to retrieve users who are 25 years or older for a targeted AI campaign.

    • Configuration:

      • Table: Users

      • Filters: { "age": { "$gte": 25 } }


    2. Retrieving Pending Loan Applications

    A financial agent needs to fetch all loan applications that are pending review.

    • Configuration:

      • Table: LoanApplications

      • Filters: { "status": "pending" }


    3. Fetching Recent Transactions for a User

    A customer service workflow needs to fetch the latest 5 transactions for a user with ID "U12345".

    • Configuration:

      • Table: Transactions

      • Filters: { "userId": "U12345" }

    Key Takeaways for Developers

    ✅ Efficient Data Retrieval – Fetch only the necessary data using Filters and Projections, ensuring optimized workflow execution.

    ✅ Supports Complex Queries – Use JSON-based filtering for range-based, conditional, or status-specific data retrieval.

    ✅ Seamless Integration with AI Workflows – The output can be passed to AI nodes, summary nodes, or user interaction nodes for real-time decision-making.

    ✅ Structured JSON Output for Easy Processing – Always returns results in a standardized format, making it easy to consume by subsequent nodes.

    By leveraging the Table Read Node, developers can extract meaningful insights from persistent storage, ensuring that AI agents, automation workflows, and business processes operate with real-time and structured data. 🚀

    Storage Write

    Overview

    The Storage Write Node enables developers to store files in a structured way within the Document Storage system. This is particularly useful for organizing and managing important documents collected during workflow execution.

    By using this node, developers can: ✅ Store uploaded documents in a pre-configured storage category. ✅ Ensure better organization by separating different types of documents (e.g., Identity Proofs, Financial Statements, Bank Statements). ✅ Retrieve stored documents later using the storageId returned by this node.

    Common Workflow Pattern for Storage Write Usage

    1️⃣ Accept documents in a workflow using the Upload Document Node → This generates a documentId. 2️⃣ Use the Storage Write Node to save the document under a specific storage category. 3️⃣ Store the returned storageId in Tables (or an external database) for future reference.

    🔹 Example Use-Case: In a loan application processing system, AI agents collect KYC documents such as ID proofs, address proofs, and financial statements. These documents can be stored in separate storage categories for better accessibility and organization.

    Configurations

    Field
    Description

    Execution Flow:

    1️⃣ The Storage Write Node receives a documentId (generated by the Upload Document Node). 2️⃣ The document is saved under the selected storage category. 3️⃣ The node returns a storageId, which serves as a reference for future retrieval.

    Output Format:

    • storageId → A unique identifier that can be used later to fetch the stored document.

    Example Use-Cases

    Use-Case 1: Storing KYC Documents for Loan Applications

    A loan application workflow requires users to submit KYC documents (e.g., Passport, Address Proof, Bank Statements). These documents need to be stored in specific storage categories for structured management.

    Configuration:

    Field
    Value

    Execution Process:

    1️⃣ User uploads their passport as part of the loan application process. 2️⃣ The Upload Document Node returns a documentId. 3️⃣ The Storage Write Node saves the document under the KYC storage category. 4️⃣ The node returns a storageId, which is then stored in Tables or an external database against the loan application.

    🔹 Why use this approach? ✔ Keeps KYC documents structured for each applicant. ✔ Ensures secure, organized storage for future verification. ✔ Allows retrieval of stored documents during loan approval or audits.


    Use-Case 2: Managing Financial Statements for Business Loans

    A business loan application process collects financial statements (Balance Sheets, Profit & Loss Statements, etc.), which need to be stored separately for compliance.

    Configuration:

    Field
    Value

    Execution Process:

    1️⃣ The applicant uploads their company’s financial statements. 2️⃣ The Upload Document Node generates a documentId. 3️⃣ The Storage Write Node saves the document under the Financial Statements category. 4️⃣ The generated storageId is stored against the business loan record for future reference.

    🔹 Why use this approach? ✔ Ensures regulatory compliance by keeping business financials structured. ✔ Facilitates quick retrieval during underwriting or risk assessment. ✔ Enhances security by categorizing different document types properly.


    Use-Case 3: Archiving Processed Invoices in Document Storage

    A company’s invoice processing workflow requires invoices to be stored systematically for audit and compliance.

    Configuration:

    Field
    Value

    Execution Process:

    1️⃣ A supplier uploads an invoice for payment processing. 2️⃣ The Upload Document Node generates a documentId. 3️⃣ The Storage Write Node saves the document in the Invoices storage. 4️⃣ The storageId is stored in the finance system for future reconciliation.

    🔹 Why use this approach? ✔ Creates a structured archive of financial documents. ✔ Ensures easy tracking of invoices for audit compliance. ✔ Streamlines retrieval when verifying payments or resolving disputes.

    Key Takeaways for Developers

    ✅ Structured Document Management – Allows developers to store documents in pre-configured storage categories, ensuring better organization and retrieval.

    ✅ Seamless Integration with Workflows – Works in conjunction with the Upload Document Node to facilitate a complete document processing pipeline.

    ✅ Improves Data Consistency – The generated storageId can be stored in Tables or an external database, ensuring document traceability in future workflow executions.

    ✅ Flexible Storage Path Configuration – Developers can define custom storage paths, enabling logical separation of KYC files, financial documents, invoices, and more.

    ✅ Supports Compliance and Auditing – By categorizing and structuring document storage, this node helps maintain audit trails and regulatory compliance for critical document handling processes.

    By leveraging the Storage Write Node, developers can efficiently store, categorize, and retrieve documents across different workflow scenarios, ensuring seamless automation and better document lifecycle management. 🚀

    Output

    Overview

    The Output Node in UPTIQ Workbench allows workflows to display messages within a conversation, either for internal processing or user-facing communication. This node helps control what information is presented, to whom, and in what format, ensuring clarity in interactions and internal workflow execution.

    By leveraging the Output Node, developers can: ✅ Guide user interactions by displaying status updates, confirmations, or responses. ✅ Provide internal reasoning context without exposing information to users. ✅ Format and structure responses dynamically within AI workflows.

    Submitting the final credit memo for approval from human reviewers.

    Enable Export: Option to allow data export in CSV format.

    Create a RAG Container in UPTIQ AI Workbench.
    How to create a data store.

    • $lt – Matches values less than the given number. Example: { "loanAmount": { "$lt": 50000 } }

    • $lte – Matches values less than or equal to the given number. Example: { "loanAmount": { "$lte": 100000 } }

    • $in – Matches values in a specified list. Example: { "status": { "$in": ["pending", "under review"] } }

    • $nin – Matches values not in a specified list. Example: { "status": { "$nin": ["rejected", "closed"] } }

    Filters (Optional)

    • Used to narrow down the data retrieval by applying conditions.

    • Example: Fetching users aged 25 or older

  • Projection (Optional)

    • Controls which fields should be included or excluded in the result.

    • Example: Only retrieving name and age, excluding _id:

  • Projection: { "name": 1, "age": 1, "_id": 0 }
  • Output:

  • How it's used: The workflow processes these users for personalized AI-driven recommendations.

  • Projection: { "applicantName": 1, "loanAmount": 1, "_id": 0 }
  • Output:

  • How it's used: The agent can display this data in a summary table and trigger review workflows.

  • Projection: { "transactionId": 1, "amount": 1, "status": 1, "_id": 0 }
  • Output:

  • How it's used: This data is summarized and presented to the support agent during a live chat.

  • Table – Select the table from which data will be retrieved.

  • Filters (Optional) – Define a JSON filter using NoSQL query syntax to apply conditions when querying data. Leave empty to fetch all records.

  • Projection (Optional) – Define which fields should be included or excluded in the output. Helps in optimizing data retrieval.
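
    Putting these three fields together, a Table Read configuration for the "users aged 25 or older" example might look like the sketch below (the lowercase property names are illustrative, not the exact labels shown in the UI):

    {
      "table": "Users",
      "filters": { "age": { "$gte": 25 } },
      "projection": { "name": 1, "age": 1, "_id": 0 }
    }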

    • $eq – Matches an exact value. Example: { "status": { "$eq": "approved" } }

    • $ne – Matches values not equal to the given value. Example: { "status": { "$ne": "rejected" } }

    • $gt – Matches values greater than the given number. Example: { "age": { "$gt": 30 } }

    • $gte – Matches values greater than or equal to the given number. Example: { "age": { "$gte": 25 } }

    • Storage – Select the pre-configured storage where the file will be stored. This ensures documents are organized under specific categories.

    • Document ID – The unique document identifier received from the Upload Document Node. This is the input to the Storage Write Node.

    • Storage Path – Define the specific path within the storage where the document should be placed. This helps in better structuring and retrieval.

    • Storage: KYC Documents

    • Document ID: fa5d0517-a479-49a5-b06e-9ed599f8e57a

    • Storage Path: kyc/user_12345/

    • Storage: Financial Statements

    • Document ID: b85c7029-df3a-49ab-a45e-3bdfb79d6b7a

    • Storage Path: business_loans/applicant_5678/

    • Storage: Invoices

    • Document ID: c94a8223-ea5b-4cc5-b36f-7dcf54bfa2e4

    • Storage Path: invoices/processed/

    Configurations

    • Type – Determines who sees the message. Options: External (visible to users) or Internal (for reasoning engine only).

    • Text – The actual message that will be displayed.

    • Formatting Instructions – Determines how the message should be structured. Example: Preserve formatting as provided by the user.

    1. Type

    Internal Messages

    The Internal Message Type in the Output Node is a powerful feature that allows developers to create a chain effect of user queries. When an Internal Message is generated, the output of this node is automatically passed back to the Intent Classification system, triggering a new round of sub-agent classification, intent identification, and workflow execution—just as if the user had manually entered the message.

    How It Works

    1. The Output Node generates an Internal Message instead of displaying it to the user.

    2. This message is fed back into the AI agent, acting as a new user query.

    3. The Intent Classification system processes the message and routes it to the appropriate sub-agent and workflow.

    4. The AI executes the next set of actions proactively, reducing the need for additional user input.
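
    As a sketch, an Output Node configured for this chaining behavior might carry a payload like the following (property names are illustrative; in the Workbench UI they map to the Type and Text fields):

    {
      "type": "Internal",
      "text": "The user's loan application is still pending. Ask them if they want to connect with a loan officer for assistance."
    }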

    Why This Matters for AI Workflow Design

    ✅ Creates a Proactive AI Agent

    • AI agents don’t always need to wait for a user’s next query.

    • Instead, they can predict logical next steps and execute them automatically.

    ✅ Reduces User Input for Multi-Step Processes

    • Users don’t need to enter repetitive queries.

    • AI can generate follow-up queries dynamically, making interactions more efficient.

    ✅ Enables Smarter Flow Execution

    • AI auto-generates queries based on user intent and retrieved data.

    • This makes workflows more intelligent and context-aware.

    External Messages

    • Displayed to the user as part of the conversation.

    • Used for providing responses, confirmations, or next steps.

    • Example: "Thank you for your patience. Your report is now ready."

    1. Text: Enter the message that should be displayed to the user or forwarded to the intent classification module of the reasoning engine.

    2. Formatting Instructions: Specify your formatting guidelines in this field. Workbench applies these instructions during output generation to customize the format and include any additional information needed.

    Output Format

    After execution, the message is displayed based on its type. For External Messages, the user will see:

    For Internal Messages, the AI receives the string output based on what is set in the Text field.

    Example Use-Cases

    1. Notifying Users About Report Generation

    A workflow generates a financial report and needs to notify the user when it's ready.

    • Configuration:

      • Type: External

      • Text: "Thank you for your patience. Your report is now ready."

      • Formatting Instructions: Preserve formatting provided by user.

    • Outcome: The user receives a message confirming report availability.

    2. Providing Internal Instructions to AI Reasoning Engine

    A workflow processes a loan application and requires an internal update for reasoning.

    • Configuration:

      • Type: Internal

      • Text: "Processing loan application with ID: LA123456."

    • Outcome:

      • The AI receives contextual guidance without exposing this message to the user.

    3. Auto-Follow-Up on a Loan Application Status

    Scenario: A user asks: "What is the status of my loan application?" The workflow fetches the loan status and, if it’s pending, generates a follow-up query to suggest additional actions.

    • Configuration:

      • Type: Internal

      • Text: "The user’s loan application is still pending. Ask them if they want to connect with a loan officer for assistance."

    • Outcome:

      • Instead of requiring the user to ask "Can I connect with a loan officer?", the AI proactively generates this question.

      • The new query goes through intent classification, triggering a workflow to offer an appointment booking option.

    4. Intelligent Next-Step Execution in Financial Reports

    Scenario: A user requests: "Show me my last 3 transactions." Once transactions are retrieved, the AI automatically asks if the user wants further insights, such as categorizing spending trends.

    • Configuration:

      • Type: Internal

      • Text: "User requested their last 3 transactions. Generate a query to analyze spending patterns and show category-wise breakdown."

    • Outcome:

      • Instead of waiting for the user to ask for a spending analysis, the AI triggers the next logical step proactively.

      • The Intent Classification system processes the new query and runs the appropriate workflow.

    Key Takeaways for Developers

    ✅ Control Information Visibility – Decide whether messages should be visible to users (External) or restricted to AI reasoning (Internal).

    ✅ Enhance User Experience – Use External Messages to provide real-time updates, confirmations, or guided interactions.

    ✅ Improve Workflow Debugging & Context Awareness – Use Internal Messages to log key workflow steps and guide AI behavior.

    ✅ Customizable Formatting – Messages can preserve user-provided formatting, ensuring structured communication.

    By integrating the Output Node, developers can improve the clarity, control, and effectiveness of AI-driven conversations, enhancing both user experience and internal process efficiency. 🚀

    { 
        "age": { "$gte": 25 } 
    }
    { 
        "name": 1, "age": 1, "_id": 0 
    }
    {
      "data": [
        { "name": "Alice", "age": 25 },
        { "name": "Bob", "age": 30 }
      ]
    }
    {
      "data": [
        { "applicantName": "John Doe", "loanAmount": 50000 },
        { "applicantName": "Jane Smith", "loanAmount": 75000 }
      ]
    }
    {
      "data": [
        { "transactionId": "T001", "amount": 150, "status": "completed" },
        { "transactionId": "T002", "amount": 300, "status": "pending" }
      ]
    }
    { "status": { "$eq": "approved" } }
    { "amount": { "$gt": 500 } }
    { "age": { "$gte": 25, "$lte": 40 } }
    { 
        "data": any[] 
    }
    {
      "storageId": "fa5d0517-a479-49a5-b06e-9ed599f8e57a"
    }
    Thank you for your patience. Your report is now ready.

    RAG Query

    Overview

    The Vector Search (RAG) Node enhances LLM-generated responses by retrieving relevant information from a specified RAG container before formulating an answer. This retrieval-augmented generation (RAG) approach allows AI models to generate factually accurate, up-to-date, and context-aware responses, making it ideal for knowledge-based applications such as:

    ✅ Customer Support Assistants – Retrieve documentation and past resolutions to provide accurate troubleshooting. ✅ AI-Powered Documentation Search – Enhance LLM responses by retrieving technical guides, user manuals, and FAQs. ✅ Enterprise Knowledge Management – Search through internal databases and return relevant company policies, reports, and guidelines. ✅ Personalized Recommendations – Retrieve historical user interactions to customize AI-generated responses.

    How It Works

    1️⃣ Retrieves relevant document embeddings from the RAG container. 2️⃣ Enriches the user query with retrieved data before passing it to an LLM. 3️⃣ Processes the combined input using the selected AI model. 4️⃣ Returns an AI-generated response based on both retrieved context and model reasoning.

    🔹 Example Use-Case: A technical support chatbot retrieves documentation on network errors before generating a troubleshooting response for a user.

    Configurations

    Field
    Description

    Execution Flow:

    1️⃣ The Vector Search Node queries the RAG container for relevant context. 2️⃣ The retrieved documents are used to enrich the user’s query before sending it to the LLM. 3️⃣ The LLM processes the query + retrieved information, ensuring the response is grounded in factual data. 4️⃣ The node returns a response along with source citations when applicable.

    Output Format:

    Plain Text Response (Default)

    Example Use-Cases

    Use-Case 1: AI-Powered Knowledge Base for IT Support

    A technical support chatbot retrieves relevant troubleshooting guides from a RAG container before generating AI-powered responses.

    Configuration:

    Field
    Value

    Example User Query:

    💬 "How do I fix a 502 Bad Gateway error on my web server?"

    Generated AI Response:


    Use-Case 2: AI-Powered Legal Document Search

    A legal AI assistant retrieves relevant contract clauses before summarizing legal documents.

    Configuration:

    Field
    Value

    Generated AI Response:


    Use-Case 3: Personalized Financial Advisory

    A financial AI assistant retrieves historical investment strategies before generating personalized recommendations.

    Configuration:

    Field
    Value

    Generated AI Response:

    Key Takeaways for Developers

    ✅ Enhances LLM Accuracy with Contextual Retrieval – The Vector Search (RAG) Node ensures that AI-generated responses are grounded in real data by retrieving relevant documents from RAG containers before processing the query.

    ✅ Supports Knowledge-Based AI Applications – Ideal for customer support chatbots, documentation search, legal research, and financial advisory, where contextual accuracy is crucial.

    ✅ Retrieves and Enriches Information Before AI Processing – Unlike a standard Prompt Node, this node first retrieves relevant documents before sending the enriched query to the LLM, improving relevance and factual correctness.

    ✅ Flexible Configuration for Structured Responses – Developers can choose between Plain Text or JSON response formats, making it suitable for both conversational AI and structured data extraction.

    ✅ Includes Metadata Filtering for Targeted Retrieval – Supports filters on document metadata, allowing developers to fine-tune retrieval and avoid irrelevant results.

    ✅ Ensures Traceability with Source Citations – Responses include document sources, making it easier for users to verify where the information came from, increasing AI reliability and trust.

    By leveraging the Vector Search (RAG) Node, developers can integrate knowledge-aware AI assistants that provide fact-based, personalized, and domain-specific responses, transforming workflows into intelligent, data-driven systems. 🚀

    Widget

    Overview

    Widgets in UPTIQ AI Workbench are modular UI components that enhance the functionality of AI agents by embedding custom interactive elements. They allow developers to create custom interfaces that integrate seamlessly within agent workflows, enabling enhanced interactivity, data visualization, and workflow automation.

    What is a Widget?

    A widget in UPTIQ AI Workbench is a reusable web component that extends the capabilities of AI agents. It can be a simple UI element like a button or a complex component that interacts with backend workflows, listens to events, and processes user input.

    How Widgets Work in UPTIQ AI Workbench

    1. Creation: Developers create widgets using React, Tailwind CSS, and Shadcn UI.

    2. Exporting: Widgets are converted into web components using @r2wc/react-to-web-component.

    3. Hosting: Built widget bundles are hosted and linked in the agent configuration.

    How to add a custom widget?

    Important Note

    It is mandatory to use only the following libraries/packages to develop a component:

    • React – https://react.dev

    • Shadcn UI – https://ui.shadcn.com/docs

    • Tailwind CSS – https://tailwindcss.com/

    • @r2wc/react-to-web-component – for converting React components to web components – https://www.npmjs.com/package/@r2wc/react-to-web-component

    Prerequisites

    • Node.js and Yarn installed

    • Basic knowledge of React and web components

    • Access to the UPTIQ AI Workbench

    Step 1: Set Up the Custom Widget Project

    1. Download the starter project from custom-widget-starter (https://drive.google.com/drive/folders/14JVnBlhEijOan1AxGA6IhjicsBMjSi8d)

    2. Install dependencies

    3. The entry point for widgets is src/index.ts. This file exports all widgets as web components.

    Step 2: Create a New Widget

    1. Create a new file: src/widgets/secondWidget/SecondWidget.tsx.

    2. Add a React component:

    3. Create an index.ts inside src/widgets/secondWidget/ for better organization:

    4. Update src/index.ts to register the new widget:

    Step 3: Build and Deploy the Widget

    1. Build the project:

    2. Host the dist/ folder bundle in the cloud.

    3. Use dist/index.js in the AI Workbench as a script:

    Step 4: Configure Custom Widget in UPTIQ AI Workbench

    1. Navigate to the Widgets tab in the AI Agent config page.

    2. Click on Add Custom Widget.

    3. Fill in the required details:

      • Name: Widget name

    Step 5: Handling Events in Custom Widgets

    • Use useSamuelEventListener to listen for custom events:

    Step 6: Running Workflows from a Widget

    Call the workflow execution API:

    Step 7: Testing Widgets Locally

    1. Update src/development/constants.ts with test values.

    2. Modify index.html to include the widget:

    3. Start the local development server:

    Conclusion

    Following these steps, developers can create, configure, and integrate custom widgets into the UPTIQ AI Workbench to enhance AI agent capabilities.

    Widgets in UPTIQ Workbench dynamically update based on events triggered within workflows. The Emit Event Node plays a crucial role in this interaction by allowing workflows to send real-time updates to UI components.

    ✅ Why It Matters?

    • The Emit Event Node ensures that widgets always display the latest data without manual refresh.

    Prompt

    Overview

    The Prompt Node serves as a direct interface between UPTIQ Workbench workflows and Large Language Models (LLMs), enabling AI-powered interactions. This node allows developers to send prompts to an LLM model, receive responses, and optionally process documents as part of the AI request.

    Emit Event

    Overview

    The Emit Event Node in UPTIQ Workbench enables workflows to communicate with UI components, trigger downstream actions, or notify other system components dynamically.

    This node is particularly useful in live applications, dynamic dashboards, and chatbot systems, where events need to be propagated instantly to ensure seamless updates and actions.

    By using the Emit Event Node, developers can: ✅ Trigger UI updates dynamically – Refresh widgets, update dashboards, or modify chatbot interfaces. ✅ Notify workflows when specific conditions are met – Signal new document uploads, user interactions, or background process completions. ✅ Support real-time reactivity – Emit events to ensure that all connected components react to system changes without manual intervention.

    External Database

    Overview

    The External Database Node in UPTIQ Workbench enables developers to connect their workflows to external databases of their choice for persistent storage, data retrieval, and data manipulation. This node offers an alternative to the Tables feature in UPTIQ by providing a generic interface for interacting with a variety of external databases, including MongoDB, SQL, PostgreSQL, Oracle, and BigQuery.

    With the External Database Node, developers can: ✅ Perform CRUD operations (Create, Read, Update, Delete) across supported databases. ✅ Integrate existing external data sources into AI agent workflows. ✅ Store and retrieve data from custom databases for more flexibility.

    • Number of Conversation Turns – Defines how much context from past interactions should be retained for better continuity.

    • RAG Container – Select the RAG container that stores relevant documents or indexed knowledge. This acts as the source of retrieved context.

    • System Prompt – Define instructions that guide the model’s behavior when generating a response. Similar to the Prompt Node, this ensures responses follow a specific format and tone.

    • Query – The user input that will be enriched with retrieved information before being processed by the LLM. Can be dynamically set using $agent.query.

    • Filters (Optional) – Apply metadata filters to narrow down retrieval results (e.g., filter documents by category, tag, or source). These filters must be configured in the RAG datastore as well.

    • Response Format – Choose between: Plain Text (default) for natural language responses or JSON for structured outputs. JSON format is recommended when structured data needs to be extracted.

    • Temperature – Adjusts the randomness of responses: Lower values (e.g., 0.1) → More predictable outputs, Higher values (e.g., 0.9) → More creative outputs.
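
    For illustration, a metadata filter passed in the Filters field might look like the sketch below (the field names category and source are assumptions; which metadata fields are filterable depends on how the RAG datastore is configured):

    {
      "category": "troubleshooting",
      "source": "kb-articles"
    }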

    • RAG Container: it_support_docs

    • System Prompt: "You are an IT support assistant helping users troubleshoot common technical issues. Provide clear, step-by-step guidance based on retrieved documentation. If no relevant information is found, recommend escalating the issue to support."

    • Query: $agent.query (automatically retrieves the user’s question)

    • Response Format: Plain Text

    • Temperature: 0.3

    • Number of Conversation Turns: 2

    • RAG Container: legal_documents

    • System Prompt: "You are an AI legal assistant. Retrieve and summarize relevant clauses from legal contracts. If no relevant clause is found, state so clearly."

    • Query: "What are the termination conditions for this contract?"

    • Response Format: JSON

    • RAG Container: investment_strategies

    • System Prompt: "You are an AI financial advisor. Retrieve past investment strategies based on the user's profile and suggest a personalized plan."

    • Query: "What’s the best investment plan for someone with a high-risk appetite?"

    • Response Format: Plain Text

  • Integration: The widget is embedded within the agent’s UI and interacts with workflows.

  • Event Listening & Execution: Widgets can listen to custom events and trigger workflows accordingly.

  • Description: Short description

  • Events: Custom events for workflow interactions

  • HTML Content: Include the script to load the widget

  • Click Save.

  • Approve the widget and toggle it on for use.

  • Events like Refresh Documents, Refresh Client Summary, and Open Document Uploader allow seamless updates to respective widgets.
  • Developers can integrate workflows with widgets by triggering the appropriate Emit Event, ensuring a responsive and interactive user experience.

  • Want to learn more? Check the Emit Event Node documentation for a full list of supported events and how to configure them in your workflow. 🚀

    {
      "content": "A 502 Bad Gateway error often indicates a communication issue between servers. Here are some troubleshooting steps: 1. Restart your web server and proxy server. 2. Check your server logs for connection errors. 3. Verify DNS settings and firewall rules. 4. If using a cloud provider, check for outages. If the issue persists, contact your hosting provider for further assistance.",
      "sources": [
        { "title": "502 Error Troubleshooting Guide", "url": "https://docs.company.com/errors/502" }
      ],
      "llmQuery": "How do I fix a 502 Bad Gateway error on my web server?"
    }
    {
      "content": "A 502 Bad Gateway error often indicates a communication issue between servers. Here are some troubleshooting steps: 
      1. Restart your web server and proxy server. 
      2. Check your server logs for connection errors. 
      3. Verify DNS settings and firewall rules. 
      4. If using a cloud provider, check for outages. If the issue persists, contact your hosting provider for further assistance.",
      "sources": [
        { "title": "502 Error Troubleshooting Guide", "url": "https://docs.company.com/errors/502" }
      ],
      "llmQuery": "How do I fix a 502 Bad Gateway error on my web server?"
    }
    {
      "content": "The termination clause states that either party may terminate the contract with a 30-day notice. Early termination may incur a penalty of 15% of the remaining contract value.",
      "sources": [
        { "title": "Sample Contract - Termination Clause", "url": "https://docs.company.com/legal/contracts/termination" }
      ]
    }
    {
      "content": "Based on historical investment strategies, high-risk investors have benefited from a diversified portfolio that includes 60% stocks, 30% crypto assets, and 10% bonds. However, individual risk factors should be considered before making investment decisions.",
      "sources": [
        { "title": "High-Risk Investment Strategies", "url": "https://docs.company.com/finance/investments/high-risk" }
      ]
    }
    yarn
    export const SecondWidget = () => {
      return (
        <div>
          {/* Your component code goes here. */}
        </div>
      );
    };
    export { SecondWidget } from "./SecondWidget";
    import { SecondWidget } from "./widgets/secondWidget";
    
    const widgets = [
      { tag: "first-widget", component: FirstWidget },
      { tag: "second-widget", component: SecondWidget },
    ];
    
    widgets.forEach(registerWidgetAsWebComponent);
    yarn build
    <script src="<hosted-base-url>/index.js" type="module"></script>
    <second-widget></second-widget>
    // useCallback comes from React; useSamuelEventListener is provided by the widget starter project
    const handleEvent = useCallback((eventData: any) => {
      console.log(eventData);
    }, []);
    
    useSamuelEventListener("test-event", handleEvent);
    import axios from "axios";
    import { v4 as uuid } from "uuid";
    // getSamuelUser and getSamuelConfig are helpers provided by the widget starter project
    
    const handleRunWorkflow = async (taskInputs: any) => {
      const executionId = uuid();
      const { uid } = getSamuelUser();
      const { appId, serverUrl, widgetKey } = getSamuelConfig();
      
      const workflowId = taskInputs?.workflowId;
      if (!workflowId) throw new Error("workflowId is required");
      
      // Execute the workflow synchronously via the workflow execution API
      const response = await axios.post(
        `${serverUrl}/workflow-defs/run-sync`,
        { executionId, uid, integrationId: workflowId, appId, taskInputs },
        { headers: { widgetKey, appid: appId } }
      );
      console.log(response.data);
    };
    <script type="module" src="/src/index.ts"></script>
    <div style="width: 400px">
      <second-widget></second-widget>
    </div>
    yarn dev
    Key Capabilities:

    ✅ Enables text generation, summarization, and structured AI responses. ✅ Supports custom system prompts to define LLM behavior and response style. ✅ Accepts document attachments (documentId, Base64, or media upload) for document-based AI processing. ✅ Provides JSON or plain text responses, allowing structured outputs when needed. ✅ Allows temperature adjustment, letting developers fine-tune creativity vs. consistency.

    Common Workflow Pattern for Prompt Node Usage

    1️⃣ Select an LLM model based on the use case (e.g., GPT-4o for summarization, OpenAI O1 for reasoning tasks). 2️⃣ Define the system prompt to instruct the model on response format, tone, and behavior. 3️⃣ Pass the user query dynamically via $agent.query or a predefined input. 4️⃣ Attach supporting documents (if applicable), using documentIds from the Upload, Fetch Document, or Document to Image nodes. 5️⃣ Set response format and temperature, ensuring outputs meet workflow needs.

    🔹 Example Use-Case: A financial AI assistant retrieves a user’s uploaded balance sheet, analyzes it, and generates a structured financial summary in JSON format for further processing.

    Configurations

    • Model – Select an LLM model from the available options in UPTIQ’s Model Hub. Each model has different strengths (e.g., GPT-4o for summarization, OpenAI O1 for logical reasoning).

    • System Prompt – Define an instruction that guides the model's behavior. This prompt helps control the response format, tone, and structure.

    • Query – The user input or request that will be processed by the LLM. Can be dynamically set using $agent.query.

    • Response Format – Choose between: Plain Text (default) for natural language responses or JSON for structured responses (recommended when structured output is required).

    • Temperature – Adjusts the randomness of responses: Lower values (e.g., 0.1) → More predictable outputs, Higher values (e.g., 0.9) → More creative outputs.

    • Number of Conversation Turns – Specifies how many previous messages should be retained for context. Useful for maintaining conversation continuity.


    Execution Flow:

    1️⃣ The Prompt Node receives the user query and system prompt. 2️⃣ If documents are attached, the LLM processes the document content alongside the query. 3️⃣ The LLM generates a response in the specified format (text/JSON). 4️⃣ The output is passed to the next workflow step, enabling AI-driven decision-making.

    Output Format:

    Plain Text Response (Default)

    JSON Response Example

    Example Use-Cases

    Use-Case 1: AI-Powered SaaS Support Assistant

    A customer support chatbot leverages an LLM to answer FAQs, troubleshoot issues, and provide step-by-step guidance to users.

    Configuration:

    • Model: GPT-4

    • System Prompt: "You are a helpful and professional customer support assistant for a SaaS platform. Your goal is to provide clear, concise, and friendly responses to user inquiries. When troubleshooting, ask clarifying questions and offer step-by-step solutions. If needed, escalate to human support."

    • Query: $agent.query (automatically retrieves the user’s question)

    • Response Format: Plain Text

    • Temperature: 0.3

    • Number of Conversation Turns: 2

    Example User Query:

    💬 "I'm having trouble logging into my account. What should I do?"

    Generated AI Response:


    Use-Case 2: AI-Driven Financial Report Summarization

    A financial AI agent extracts insights from uploaded balance sheets and profit & loss statements, generating structured reports.

    Configuration:

    • Model: GPT-4o

    • System Prompt: "You are a financial analyst assistant. Summarize the key insights from the provided balance sheet in a structured JSON format."

    • Query: "Summarize the financial health of this company."

    • Response Format: JSON

    • Attached Document: documentId retrieved from Storage Read

    Generated AI Response:


    Use-Case 3: Legal Document Analysis

    An AI-powered legal document processing system extracts key clauses and provides plain-language summaries of uploaded contracts.

    Configuration:

    • Model: GPT-4

    • System Prompt: "You are an AI legal assistant. Extract key clauses and generate a plain-language summary for legal contracts."

    • Query: "Summarize the obligations and termination clauses of this contract."

    • Response Format: Plain Text

    • Attached Document: documentId from Fetch Document Node

    Generated AI Response:


    Use-Case 4: AI-Powered Interview Assistant

    An AI-powered hiring assistant generates follow-up questions based on candidate responses during an interview process.

    Configuration:

    • Model: GPT-4

    • System Prompt: "You are an AI hiring assistant. Based on the candidate's response, generate a relevant follow-up question to assess their skills further."

    • Query: "The candidate said: 'I led a team of five engineers in a major software upgrade.' What follow-up question should we ask?"

    • Response Format: Plain Text

    Generated AI Response:

    Key Takeaways for Developers

    ✅ Versatile AI-Powered Node – The Prompt Node allows direct interaction with LLMs, enabling AI-driven workflows for text generation, summarization, structured data extraction, and dynamic responses.

    ✅ Supports Custom System Prompts – Developers can fine-tune AI behavior by defining system prompts to ensure responses align with specific use-case requirements.

    ✅ Works with Attached Documents – The node accepts documentIds from Upload, Fetch Document, and Document to Image Nodes, enabling AI-powered document processing for summarization, analysis, and extraction.

    ✅ Flexible Response Formats – Choose between Plain Text for conversational responses or JSON for structured outputs, making it suitable for chatbots, automation, and data pipelines.

    ✅ Optimized for AI Performance – Features like temperature adjustment, conversation memory, and model selection allow developers to fine-tune responses for accuracy and creativity.

    ✅ Essential for AI-Driven Workflows – Ideal for customer support, legal analysis, financial insights, interview automation, and content generation, making it a powerful tool for intelligent automation.

    By leveraging the Prompt Node, developers can integrate LLM capabilities directly into workflows, enabling intelligent, context-aware, and structured AI interactions for a wide range of use cases. 🚀


    Pre-Defined Events

    Configurations

    • Event – The name of the event that will be emitted when the node executes.

    • Data – The payload associated with the event, sent as JSON or a string.

    • Re-emit event after conversation switch – When enabled, ensures the event is emitted again after a user switches between conversations, keeping UI elements in sync.

    Execution Flow:

    1️⃣ The Emit Event Node executes as part of the workflow. 2️⃣ It broadcasts an event with the specified name and data payload. 3️⃣ Any subscribed components, widgets, or workflows react to the event, triggering relevant actions. 4️⃣ If Re-emit event after conversation switch is enabled, the event fires again when a user switches back to the conversation, ensuring updates remain visible and consistent.

    Output Format:

    • event → The name of the emitted event.

    • data → The event payload, which can be used by other system components to execute related tasks.

    Example Use-Cases

    Use-Case 1: Real-Time Document Management System

    A document management system needs to refresh the document list in the UI whenever a new file is uploaded.

    Configuration:

    • Event: "Refresh Documents"

    • Data: { "workflowId": "12345", "appId": "73923" }

    • Re-emit event after conversation switch: Enabled

    Execution Process:

    1️⃣ User uploads a document. 2️⃣ Emit Event Node triggers the "Refresh Documents" event. 3️⃣ The UI listens for this event and updates the document list dynamically. 4️⃣ If the user switches conversations, the event is re-emitted, ensuring they always see the latest files.

    🔹 Why use this approach? ✔ Ensures all users see updated document lists immediately. ✔ No need for manual refreshes or polling, making the system more efficient.


    Use-Case 2: Live Chatbot UI Updates

    A chatbot workflow needs to update the conversation UI whenever a new response is generated.

    Configuration:

    • Event: "Update Chat"

    • Data: { "messageId": "56789", "status": "received" }

    • Re-emit event after conversation switch: Enabled

    Execution Process:

    1️⃣ Chatbot generates a response. 2️⃣ The Emit Event Node triggers "Update Chat", notifying the UI. 3️⃣ The UI updates the chat thread, displaying the new response dynamically. 4️⃣ If the user switches conversations, the event re-emits, ensuring updates are retained.

    🔹 Why use this approach? ✔ Prevents UI delays when showing new chatbot responses. ✔ Ensures users always see the latest conversation state.


    Supported Events in Emit Event Node

    The Emit Event Node allows developers to trigger predefined system events in the UI, enabling workflows to dynamically update widgets and refresh data. Developers cannot create custom events declaratively—only the supported events listed below can be used. To add new events, code changes are required, and they must be handled within the Workbench SDK.

    List of Supported Events & Their Actions

    • Refresh Documents – Updates the Documents widget to display newly available documents. Requirement: the Get Documents Workflow must run automatically. Related widget: Documents.

    • Refresh Connections – Refreshes the Connected Apps widget to update the list of linked applications. No additional requirements. Related widget: Connected Apps.

    • Refresh Covenants – Refreshes and opens the Covenants widget to display the latest covenants. Requirement: the Get Covenants Workflow must execute automatically. Related widget: Covenants.

    🔹 Important Notes:

    • These events cannot be modified or expanded declaratively; all modifications require backend code changes in the Workbench SDK.

    • Ensure relevant workflows are configured correctly to handle data retrieval before emitting an event.

    • The Emit Event Node is crucial for real-time UI updates, ensuring that workflows remain in sync with the latest data.

    Key Takeaways for Developers

    ✅ Enables Real-Time Workflow Communication – Ensures workflows, UI components, and external systems react instantly to changes.

    ✅ Supports Dynamic UI Updates – Used in dashboards, chatbots, and document management systems to keep interfaces synchronized.

    ✅ Works with JSON-Powered Events – Events can carry structured data, enabling complex processing and decision-making.

    ✅ Prevents UI State Loss – When Re-emit after conversation switch is enabled, users never miss an update, even after navigating away.

    ✅ Ideal for Event-Driven Architectures – Perfect for finance alerts, chatbot interactions, and workflow automation requiring instant notifications.

    By leveraging the Emit Event Node, developers can build highly responsive applications that react dynamically to workflow changes, UI updates, and event-driven automation, creating seamless and real-time user experiences. 🚀


    Configurations

    • MongoDB – Supported operations: CRUD (Read, Write). Configuration fields: Database URI, Database Name, Collection Name, Filters, Projections, Data (for Write).

    • SQL – Supported operations: Query Execution. Configuration fields: Database URI, Database Name, Query.

    • PostgreSQL – Supported operations: Query Execution. Configuration fields: Database URI, Database Name, Query.

    • Oracle – Supported operations: Query Execution. Configuration fields: Database URI, Database Name, Query.

    MongoDB Configuration Details

    • Database URI: Specify the URI to connect to the MongoDB instance.

    • Database Name: Name of the MongoDB database.

    • Collection Name: Collection to read from or write to.

    • Filters: JSON object specifying which documents to retrieve or modify. Example: { "_id": "123", "status": "active" }

    • Projections: JSON object specifying fields to include or exclude in results. Example: { "_id": 0, "name": 1 }

    • Data (for Write): JSON object or array for insert/update operations. Example: [ { "orderId": "1001", "totalAmount": 250 } ]


    SQL, PostgreSQL, and Oracle Configuration Details

    • Database URI: URI to connect to the database.

    • Database Name: Name of the database.

    • Query: SQL query to execute. Example: SELECT * FROM orders WHERE status = 'completed';


    BigQuery Configuration Details

    • Project ID: Google Cloud project ID.

    • Client Email: Email associated with authentication.

    • Private Key: Private key for authentication.

    • Query: SQL query for BigQuery. Example: SELECT orderId, totalAmount FROM orders WHERE status = 'completed';


    Output Format

    The output is always returned in the following format:

    Example Use-Cases

    Use-Case 1: Retrieving Completed Orders from MongoDB

    A workflow retrieves a list of completed orders from a MongoDB collection for reporting purposes.

    Configuration:

    • Operation: Read

    • Database Type: MongoDB

    • Database URI: mongodb://localhost:27017

    • Database Name: myDatabase

    • Collection Name: Orders

    • Filters: { "status": "completed" }

    Output:

    🔹 Why use this approach? ✔ Integrates existing order data into AI workflows. ✔ Supports dynamic reporting based on external data sources.


    Use-Case 2: Executing a SQL Query for Customer Insights

    A workflow queries a PostgreSQL database to extract customer details for marketing purposes.

    Configuration:

    • Operation: Read

    • Database Type: PostgreSQL

    • Database URI: postgresql://localhost:5432

    • Database Name: customerDB

    • Query: SELECT name, email FROM customers WHERE status = 'active';

    Output:

    🔹 Why use this approach? ✔ Supports real-time data retrieval for targeted marketing campaigns. ✔ Connects AI workflows to external customer databases seamlessly.


    Use-Case 3: Writing New Transactions to a MongoDB Collection

    A workflow writes new transaction records into a MongoDB collection.

    Configuration:

    • Operation: Write

    • Database Type: MongoDB

    • Database URI: mongodb://localhost:27017

    • Database Name: financialDB

    • Collection Name: Transactions

    • Data: [ { "transactionId": "TX1003", "amount": 500 } ]

    Output:

    🔹 Why use this approach? ✔ Supports flexible data storage in custom databases. ✔ Integrates transaction data into external systems seamlessly.


    Key Takeaways for Developers

    ✅ Flexible Database Support – Connects to a wide range of external databases, including MongoDB, SQL, PostgreSQL, Oracle, and BigQuery.

    ✅ CRUD and Query Operations – Supports read, write, update, and delete operations, enabling dynamic data management.

    ✅ Seamless Integration – Acts as a generic interface, allowing developers to use their preferred databases for persistent storage or AI processing.

    ✅ Alternative to Tables – Provides an alternative to the UPTIQ Tables feature, offering greater flexibility with external databases.

    By leveraging the External Database Node, developers can integrate real-time data from external sources into AI workflows, enhancing decision-making and enabling scalable, data-driven automation. 🚀


    Attach Supporting Documents

    The Prompt Node supports document processing using different methods:

    • Base64 Document Data: Embed a document in Base64 format for LLM processing.

    • Document IDs: Attach pre-existing documents (e.g., invoices, contracts) using documentIds retrieved from Upload, Fetch Document, or Document to Image nodes.

    • Media Upload from Conversation: Use uploaded media from conversation history for context-aware responses.
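
    As a rough, hypothetical sketch (the exact Prompt Node field names may differ, and the IDs shown are placeholders), attaching two pre-existing documents by ID could look like:

    {
      "documentIds": ["doc_1001", "doc_1002"]
    }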

    Event | Description | Requirements | Widget
    Refresh Client Summary | Updates and opens the Client Summary widget with fresh client data. | The event payload must contain an array of clients. The Get Client Summary Workflow must run. | Client Summary
    Refresh Conversation Summary | Triggers the Conversation Summary widget, which generates a summary of the conversation using an LLM. | No additional requirements. | Conversation Summary
    Open Document Uploader | Opens the Document Dropbox widget, allowing users to upload documents. | No additional requirements. | Documents
    Refresh Tasks | Updates the Tasks widget with the latest task list. | The Get Tasks Workflow must run automatically. | Tasks
    Refresh Document Summary | Refreshes the Document Summary widget to display the latest document summaries. | The Get Document Summary Workflow must execute. | Document Summary
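
    These widget events are dispatched as a payload that pairs an event name with a data object. For example, a Refresh Documents event carrying the workflow and app identifiers looks like this:

    {
      "event": "Refresh Documents",
      "data": {
        "workflowId": "12345",
        "appId": "73923"
      }
    }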
