Prompt
Overview
The Prompt Node serves as a direct interface between UPTIQ Workbench workflows and Large Language Models (LLMs), enabling AI-powered interactions. It allows developers to send prompts to an LLM, receive responses, and optionally process documents as part of the AI request.
Key Capabilities:
✅ Enables text generation, summarization, and structured AI responses.
✅ Supports custom system prompts to define LLM behavior and response style.
✅ Accepts document attachments (documentId, Base64, or media upload) for document-based AI processing.
✅ Provides JSON or plain text responses, allowing structured outputs when needed.
✅ Allows temperature adjustment, letting developers fine-tune creativity vs. consistency.
Common Workflow Pattern for Prompt Node Usage
1️⃣ Select an LLM model based on the use case (e.g., GPT-4o for summarization, OpenAI O1 for reasoning tasks).
2️⃣ Define the system prompt to instruct the model on response format, tone, and behavior.
3️⃣ Pass the user query dynamically via $agent.query or a predefined input.
4️⃣ Attach supporting documents (if applicable), using documentIds from the Upload, Fetch Document, or Document to Image nodes.
5️⃣ Set response format and temperature, ensuring outputs meet workflow needs.
🔹 Example Use-Case: A financial AI assistant retrieves a user’s uploaded balance sheet, analyzes it, and generates a structured financial summary in JSON format for further processing.
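Put together, a Prompt Node configuration following this pattern might look like the sketch below. The field names are illustrative only and may not match the exact Workbench schema:

```json
{
  "model": "gpt-4o",
  "systemPrompt": "You are a financial analyst assistant. Return a structured JSON summary of the attached balance sheet.",
  "query": "$agent.query",
  "documentIds": ["<documentId from an Upload or Fetch Document node>"],
  "responseFormat": "JSON",
  "temperature": 0.2,
  "conversationTurns": 2
}
```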
Configurations
Model
Select an LLM model from the available options in UPTIQ’s Model Hub. Each model has different strengths (e.g., GPT-4o for summarization, OpenAI O1 for logical reasoning).
System Prompt
Define an instruction that guides the model's behavior. This prompt helps control the response format, tone, and structure.
Query
The user input or request that will be processed by the LLM. Can be dynamically set using $agent.query.
Response Format
Choose between Plain Text (default) for natural-language responses and JSON for structured responses (recommended when downstream workflow steps need to parse the output).
Temperature
Adjusts the randomness of responses: lower values (e.g., 0.1) yield more predictable outputs; higher values (e.g., 0.9) yield more creative outputs.
Number of Conversation Turns
Specifies how many previous messages should be retained for context. Useful for maintaining conversation continuity.
Attach Supporting Documents
The Prompt Node supports document processing using different methods:
Base64 Document Data
Embed a document in Base64 format for LLM processing.
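Base64 encoding converts a document's raw bytes into a text-safe string that can be embedded in the node's configuration. A minimal Python sketch (the `encode_document` helper name is illustrative, not part of the Workbench API):

```python
import base64

def encode_document(path: str) -> str:
    """Read a local file and return its contents as a Base64 string,
    suitable for passing to the Prompt Node as Base64 document data."""
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("ascii")

# Round-trip check on in-memory bytes instead of a real file:
payload = base64.b64encode(b"%PDF-1.7 example").decode("ascii")
print(payload)  # JVBERi0xLjcgZXhhbXBsZQ==
```

The same string decodes back to the original bytes with `base64.b64decode(payload)`.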
Document IDs
Attach pre-existing documents (e.g., invoices, contracts) using documentIds retrieved from Upload, Fetch Document, or Document to Image nodes.
Media Upload from Conversation
Use uploaded media from conversation history for context-aware responses.
Execution Flow:
1️⃣ The Prompt Node receives the user query and system prompt.
2️⃣ If documents are attached, the LLM processes the document content alongside the query.
3️⃣ The LLM generates a response in the specified format (text/JSON).
4️⃣ The output is passed to the next workflow step, enabling AI-driven decision-making.
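The steps above can be sketched in Python, with a stubbed model call standing in for the actual Workbench runtime. All function and field names here are invented for illustration:

```python
import json

def call_llm(model, system_prompt, query, documents=None, temperature=0.3):
    """Stand-in for the real model call; returns a canned structured reply."""
    return json.dumps({
        "answer": f"Processed: {query}",
        "documents_used": len(documents or []),
    })

def run_prompt_node(query, documents=None, response_format="json"):
    # 1. Receive the user query and system prompt.
    system_prompt = "You are a helpful assistant."
    # 2-3. Attach documents (if any) and generate the response.
    raw = call_llm("gpt-4o", system_prompt, query, documents, temperature=0.2)
    # 4. Hand the output to the next workflow step in the requested format.
    return json.loads(raw) if response_format == "json" else raw

result = run_prompt_node("Summarize this report.", documents=["doc-123"])
print(result["documents_used"])  # 1
```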
Output Format:
Plain Text Response (Default)
JSON Response Example
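An illustrative JSON response is shown below; the actual field names and values depend entirely on your system prompt, and these are invented for the example:

```json
{
  "summary": "Revenue grew 12% year over year while operating costs held flat.",
  "key_metrics": {
    "revenue": "4.2M",
    "net_income": "0.9M"
  },
  "recommendation": "Maintain current cost controls."
}
```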
Example Use-Cases
Use-Case 1: AI-Powered SaaS Support Assistant
A customer support chatbot leverages an LLM to answer FAQs, troubleshoot issues, and provide step-by-step guidance to users.
Configuration:
Model
GPT-4
System Prompt
"You are a helpful and professional customer support assistant for a SaaS platform. Your goal is to provide clear, concise, and friendly responses to user inquiries. When troubleshooting, ask clarifying questions and offer step-by-step solutions. If needed, escalate to human support."
Query
$agent.query (automatically retrieves the user's question)
Response Format
Plain Text
Temperature
0.3
Number of Conversation Turns
2
Example User Query:
💬 "I'm having trouble logging into my account. What should I do?"
Generated AI Response:
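An illustrative plain-text reply (the actual wording will vary by model and prompt):

```
I'm sorry you're having trouble logging in. Let's try a few steps:
1. Confirm you're using the email address associated with your account.
2. Use the "Forgot password" link on the login page to reset your password.
3. Clear your browser cache or try an incognito window.
If none of these work, I can escalate this to our human support team.
```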
Use-Case 2: AI-Driven Financial Report Summarization
A financial AI agent extracts insights from uploaded balance sheets and profit & loss statements, generating structured reports.
Configuration:
Model
GPT-4o
System Prompt
"You are a financial analyst assistant. Summarize the key insights from the provided balance sheet in a structured JSON format."
Query
"Summarize the financial health of this company."
Response Format
JSON
Attached Document
documentId retrieved from Storage Read
Generated AI Response:
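An illustrative structured response for this configuration; the field names and figures below are invented for the example and will depend on the actual document and system prompt:

```json
{
  "liquidity": "Current ratio of 1.8 indicates healthy short-term liquidity.",
  "profitability": "Net margin improved from 8% to 11% year over year.",
  "leverage": "Debt-to-equity of 0.6 is within a conservative range.",
  "overall_assessment": "Financially stable with improving profitability."
}
```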
Use-Case 3: Legal Document Analysis
An AI-powered legal document processing system extracts key clauses and provides plain-language summaries of uploaded contracts.
Configuration:
Model
GPT-4
System Prompt
"You are an AI legal assistant. Extract key clauses and generate a plain-language summary for legal contracts."
Query
"Summarize the obligations and termination clauses of this contract."
Response Format
Plain Text
Attached Document
documentId from Fetch Document Node
Generated AI Response:
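An illustrative plain-text summary; the clauses below are invented for the example and would be drawn from the attached contract in practice:

```
Obligations: The vendor must deliver monthly status reports and maintain
99.9% service availability. The client must pay invoices within 30 days.
Termination: Either party may terminate with 60 days' written notice; the
client may terminate immediately for a material breach left uncured for
15 days.
```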
Use-Case 4: AI-Powered Interview Assistant
An AI-powered hiring assistant generates follow-up questions based on candidate responses during an interview process.
Configuration:
Model
GPT-4
System Prompt
"You are an AI hiring assistant. Based on the candidate's response, generate a relevant follow-up question to assess their skills further."
Query
"The candidate said: 'I led a team of five engineers in a major software upgrade.' What follow-up question should we ask?"
Response Format
Plain Text
Generated AI Response:
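An illustrative follow-up question the model might generate for this query:

```
"You mentioned leading five engineers through a major software upgrade.
What was the most significant technical or interpersonal obstacle you
faced during that upgrade, and how did you resolve it?"
```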
Key Takeaways for Developers
✅ Versatile AI-Powered Node – The Prompt Node allows direct interaction with LLMs, enabling AI-driven workflows for text generation, summarization, structured data extraction, and dynamic responses.
✅ Supports Custom System Prompts – Developers can fine-tune AI behavior by defining system prompts to ensure responses align with specific use-case requirements.
✅ Works with Attached Documents – The node accepts documentIds from Upload, Fetch Document, and Document to Image Nodes, enabling AI-powered document processing for summarization, analysis, and extraction.
✅ Flexible Response Formats – Choose between Plain Text for conversational responses or JSON for structured outputs, making it suitable for chatbots, automation, and data pipelines.
✅ Optimized for AI Performance – Features like temperature adjustment, conversation memory, and model selection allow developers to fine-tune responses for accuracy and creativity.
✅ Essential for AI-Driven Workflows – Ideal for customer support, legal analysis, financial insights, interview automation, and content generation, making it a powerful tool for intelligent automation.
By leveraging the Prompt Node, developers can integrate LLM capabilities directly into workflows, enabling intelligent, context-aware, and structured AI interactions for a wide range of use cases. 🚀