
πŸ”ƒ Processor



We introduce the concept of a "Processor".

If you are sending requests to an LLM provider such as OpenAI, think of a processor as a RESTful API wrapper: it takes a request (Input), sends the request to the provider (i.e. OpenAIService), and returns the response (Output).

Take the OpenAI API as an example: it exposes many endpoints, such as completion, chat completion, and embedding creation.

Each of them is a separate processor, since each has a different input type, different processing logic, and a different output type.

import com.google.common.util.concurrent.ListenableFuture;

// A Processor transforms a typed Input into a typed Output.
public interface Processor<I extends Input, O extends Output> {
  // Run synchronously, blocking until the result is available.
  O run(I inputData);

  // Run asynchronously, returning a Guava ListenableFuture.
  ListenableFuture<O> runAsync(I inputData);
}

The processor takes an Input and generates an Output.
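To make the contract concrete, here is a minimal, dependency-free sketch of the pattern. Note the assumptions: `Input` and `Output` are reduced to empty marker interfaces, `runAsync` returns a `java.util.concurrent.CompletableFuture` instead of Guava's `ListenableFuture` so the sketch compiles on its own, and `EchoProcessor` is a toy stand-in (it upper-cases text locally rather than calling a provider such as OpenAIService). `SingleText` here is only an illustrative record, not Langtorch's actual pre-built type.

```java
import java.util.concurrent.CompletableFuture;

// Marker interfaces standing in for Langtorch's Input/Output types.
interface Input {}
interface Output {}

// The Processor contract sketched above. CompletableFuture replaces
// Guava's ListenableFuture so this example has no external dependency.
interface Processor<I extends Input, O extends Output> {
  O run(I inputData);

  // Default async variant: run the synchronous path on a worker thread.
  default CompletableFuture<O> runAsync(I inputData) {
    return CompletableFuture.supplyAsync(() -> run(inputData));
  }
}

// Illustrative payload type usable as both Input and Output.
record SingleText(String text) implements Input, Output {}

// Toy processor: upper-cases the text instead of calling a real LLM endpoint.
class EchoProcessor implements Processor<SingleText, SingleText> {
  @Override
  public SingleText run(SingleText inputData) {
    return new SingleText(inputData.text().toUpperCase());
  }
}

public class ProcessorDemo {
  public static void main(String[] args) throws Exception {
    Processor<SingleText, SingleText> p = new EchoProcessor();
    System.out.println(p.run(new SingleText("hello")).text());            // HELLO
    System.out.println(p.runAsync(new SingleText("async")).get().text()); // ASYNC
  }
}
```

A real chat-completion or embedding processor would follow the same shape, differing only in its input type, its call to the provider, and its output type.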
