Model Hub

Large Language Models (LLMs)

UPTIQ provides access to foundational LLMs from leading providers such as OpenAI, Meta, Google, Anthropic, and Groq. This allows developers to experiment with different models and evaluate their behavior for specific AI use cases within their agents.

Exploring Different Model Capabilities

Each LLM has unique strengths that developers can leverage based on their needs:

  • OpenAI GPT models – Strong in natural language understanding, summarization, and creative writing.

  • Meta’s LLaMA models – Optimized for efficiency and fine-tuning on specific domains.

  • Google Gemini – Enhanced for multi-modal capabilities, including text and image processing.

  • Anthropic’s Claude models – Designed with a focus on safety, low hallucination, and instruction-following.

  • Groq models – Ultra-fast inference speeds, suitable for real-time AI applications.

By trying different LLMs, developers can find the best fit for accuracy, efficiency, and performance in their AI solutions.
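The model strengths above can be sketched as a simple task-to-model routing table. This is an illustrative Python sketch, not an UPTIQ API; the model identifiers and routing categories are assumptions for demonstration.

```python
# Hypothetical routing table mapping task types to the model families
# described above. Model names are illustrative placeholders.
TASK_MODEL_MAP = {
    "summarization": "openai/gpt-4o",               # natural language understanding
    "domain_finetune": "meta/llama-3-8b",           # efficient fine-tuning
    "multimodal": "google/gemini-1.5-pro",          # text + image processing
    "instruction_following": "anthropic/claude-3-5-sonnet",  # safety-focused
    "realtime": "groq/llama-3-70b",                 # ultra-fast inference
}

def pick_model(task: str) -> str:
    """Return a candidate model for the task, defaulting to a general model."""
    return TASK_MODEL_MAP.get(task, "openai/gpt-4o")

print(pick_model("realtime"))  # groq/llama-3-70b
```

In practice you would benchmark several candidates on your own evaluation set rather than rely on a static table.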

Fine-Tuning LLMs in UPTIQ

UPTIQ’s Model Hub includes the ability to run fine-tuning pipelines for any supported model.

What is Fine-Tuning?

Fine-tuning is the process of training an existing LLM on domain-specific data to improve accuracy and relevance. Instead of training from scratch, fine-tuning allows the model to:

  • Adapt to specialized vocabulary and context (e.g., financial or legal language).

  • Enhance accuracy on specific tasks like document summarization or compliance verification.

  • Reduce hallucinations by reinforcing factual correctness based on curated datasets.
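Fine-tuning pipelines typically start from a dataset of example conversations. Below is a minimal sketch of preparing such a dataset in the JSONL chat format many fine-tuning pipelines accept; the example records and the exact schema UPTIQ expects are assumptions, so check the platform documentation before uploading.

```python
import json

# Illustrative domain-specific Q&A pairs (e.g., financial language).
examples = [
    {"question": "What is an amortization schedule?",
     "answer": "A table showing each loan payment split into principal and interest."},
    {"question": "Define APR.",
     "answer": "Annual Percentage Rate: the yearly cost of a loan including fees."},
]

def to_jsonl(records) -> str:
    """Serialize Q&A pairs into one JSON chat example per line."""
    lines = []
    for r in records:
        lines.append(json.dumps({
            "messages": [
                {"role": "user", "content": r["question"]},
                {"role": "assistant", "content": r["answer"]},
            ]
        }))
    return "\n".join(lines)

with open("finetune_data.jsonl", "w") as f:
    f.write(to_jsonl(examples))
```

Curating these examples carefully is what drives the accuracy and hallucination-reduction benefits listed above.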

Importing Models from TogetherAI

Developers can also import models from TogetherAI, a platform that aggregates multiple open-source LLMs and provides easy integration for inference and fine-tuning. TogetherAI enables:

  • Access to a diverse set of models beyond proprietary options.

  • Cost-efficient alternatives to running large-scale models.

  • Custom fine-tuning workflows for domain-specific enhancements.
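As a rough illustration of what inference against a TogetherAI-hosted open-source model involves, the sketch below builds (but does not send) an HTTP request to TogetherAI's OpenAI-compatible chat-completions endpoint. The endpoint path follows TogetherAI's public API; the model name and how an imported model is referenced from within UPTIQ are assumptions.

```python
import json
import urllib.request

def build_request(api_key: str, model: str, prompt: str) -> urllib.request.Request:
    """Construct a chat-completions request for a TogetherAI-hosted model."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 256,
    }
    return urllib.request.Request(
        "https://api.together.xyz/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
    )

req = build_request("TOGETHER_API_KEY", "meta-llama/Llama-3-8b-chat-hf", "Hello")
# urllib.request.urlopen(req) would send it (requires a valid API key).
```

This is the kind of open-source model access that makes TogetherAI a cost-efficient alternative to proprietary options.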

Custom Reasoning Engine (CustomRE)

UPTIQ’s Custom Reasoning Engine (CustomRE) allows developers to use their own fine-tuned models as the core Reasoning Engine for AI agents. This enhances accuracy, ensures domain-specific knowledge retention, and provides greater control over responses.

Key Benefits of CustomRE

  1. Increased Accuracy & Reduced Hallucination

    • By using a fine-tuned model, the AI can generate more precise and reliable responses tailored to the use case.

    • Reduces the risk of hallucinations by grounding responses in trusted training data.

  2. Security & Ethical Guardrails

    • Developers can enforce compliance rules by fine-tuning models on policy-compliant datasets.

    • Helps prevent bias, misinformation, or unauthorized data leakage.

    • Enables role-based access and restricted response generation for sensitive topics.

  3. Context Adherence & Consistency

    • CustomRE ensures that AI responses stay within the defined context, preventing deviation from expected behavior.

    • Ideal for applications where strict adherence to guidelines (e.g., financial compliance, legal advisory) is necessary.
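To make the benefits above concrete, here is a hypothetical sketch of what configuring a fine-tuned model as a custom reasoning engine might look like. The configuration keys, the guardrail options, and the validation rules are all illustrative assumptions, not UPTIQ's actual CustomRE API.

```python
# Hypothetical CustomRE configuration: keys are illustrative only.
CUSTOM_RE_CONFIG = {
    "model_id": "my-org/llama-3-8b-loan-advisor",  # your fine-tuned model
    "temperature": 0.1,            # low temperature for consistent answers
    "guardrails": {
        "blocked_topics": ["investment advice", "tax filing"],
        "require_citation": True,  # ground answers in trusted data
    },
    "context_policy": "strict",    # reject queries outside the agent's scope
}

def validate_config(cfg: dict) -> list[str]:
    """Return a list of problems found in a reasoning-engine config."""
    errors = []
    if not cfg.get("model_id"):
        errors.append("model_id is required")
    if not 0.0 <= cfg.get("temperature", 0.0) <= 2.0:
        errors.append("temperature must be between 0 and 2")
    if cfg.get("context_policy") not in ("strict", "relaxed"):
        errors.append("context_policy must be 'strict' or 'relaxed'")
    return errors

assert validate_config(CUSTOM_RE_CONFIG) == []
```

A strict context policy paired with low temperature is one way to express the consistency and guardrail requirements of compliance-sensitive applications.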

Key takeaway for developers

Large Language Models (LLMs) in UPTIQ

✅ Experiment with multiple LLMs (OpenAI, Meta, Google, Anthropic, Groq) to find the best fit for your use case.

✅ Understand different model capabilities—choose models based on accuracy, response time, and task efficiency.

✅ Fine-tune models to enhance accuracy and domain expertise and to reduce hallucinations.

✅ Use TogetherAI to access and import open-source models for cost-effective AI solutions.

Custom Reasoning Engine (CustomRE) in UPTIQ

✅ Leverage fine-tuned models as the reasoning engine for better control over responses.

✅ Increase accuracy and reliability by grounding AI outputs in domain-specific data.

✅ Enhance security and compliance with AI guardrails that prevent biased or unethical responses.

✅ Ensure context adherence so AI responses remain relevant and aligned with the intended use case.

By utilizing LLMs and CustomRE effectively, developers can build more intelligent, reliable, and domain-specific AI agents within UPTIQ.


Last updated 4 months ago