
Capability Unit


Last updated 2 years ago

When you send a query in ChatGPT, you are essentially sending it to a large language model together with a pre-defined prompt. In Langtorch, we call this combination a "capability unit"; in real life, it roughly corresponds to "a task", e.g. performing a Q&A task.

However, an LLM is NOT ALWAYS all you need πŸ˜‚, and sometimes you don't need an LLM at all. For example, involving an LLM to calculate (1287652 * 2372837) / 7.8765 + 9876 is not a good idea, even with GPT-4 and the famous magic prompt "Let's think step by step".

(The answer is approximately 387,911,938,870, if you are curious.)

For cases like this, a calculator function is preferable.

We also have built-in support for functions.
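For instance, the arithmetic above needs nothing more than a plain function. Here is a minimal sketch in plain Java (no Langtorch API involved; the class and method names are purely illustrative):

```java
public class CalculatorUnit {

    // Plain arithmetic: no LLM needed for (1287652 * 2372837) / 7.8765 + 9876.
    public static double compute() {
        // The product (3,055,388,308,724) fits comfortably in a 64-bit long.
        long product = 1_287_652L * 2_372_837L;
        // long / double promotes to double, so the division is floating-point.
        return product / 7.8765 + 9876;
    }

    public static void main(String[] args) {
        // Prints roughly 3.8791193887E11, i.e. about 387,911,938,870.
        System.out.println(compute());
    }
}
```

A deterministic function like this is cheaper, faster, and exact, which is why a capability unit does not have to involve a model at all.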

To put it simply, in Langtorch jargon:

A capability unit is ONE OF:

  1. Prompt Template + Model

  2. Functions
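Both variants can be viewed as the same thing: a transformation from input to output. The sketch below illustrates that idea; all names (`Unit`, `promptedModel`, `plainFunction`) are hypothetical and do not reflect Langtorch's actual API.

```java
import java.util.function.Function;

// Hypothetical sketch: a capability unit is either (prompt template + model)
// or a plain function, so both can be modeled as a String -> String mapping.
public class CapabilityUnitSketch {

    public interface Unit extends Function<String, String> {}

    // Variant 1: a prompt template combined with a model. The template's
    // "{input}" placeholder is filled in before the model is called.
    public static Unit promptedModel(String template, Function<String, String> model) {
        return input -> model.apply(template.replace("{input}", input));
    }

    // Variant 2: a plain function, e.g. a calculator -- no model involved.
    public static Unit plainFunction(Function<String, String> fn) {
        return fn::apply;
    }

    public static void main(String[] args) {
        // A stand-in "model" that just echoes the prompt it receives.
        Function<String, String> echoModel = prompt -> "model saw: " + prompt;

        Unit qa = promptedModel("Answer the question: {input}", echoModel);
        System.out.println(qa.apply("What is Langtorch?"));

        Unit doubler = plainFunction(s -> String.valueOf(Long.parseLong(s) * 2));
        System.out.println(doubler.apply("21")); // prints 42
    }
}
```

Because both variants share one shape, downstream pieces (capability nodes, graphs) can compose them without caring which kind they are.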
