🔃 Processor

Integration

We provide processor integrations for OpenAI, CohereAI, and Minimax.

Below is the processor for OpenAI chat completion; processors built on the other LLM Services follow largely the same pattern.

public class OpenAIChatProcessor implements Processor<MultiChatMessage, ChatMessage> {
  // OpenAIService instance used for making requests
  private final OpenAIService openAIService;
  // Configuration for the OpenAI chat processor
  private final OpenAIChatProcessorConfig openAIChatProcessorConfig;

  @Inject
  public OpenAIChatProcessor(
      OpenAIService openAIService, OpenAIChatProcessorConfig openAIChatProcessorConfig) {
    this.openAIService = openAIService;
    this.openAIChatProcessorConfig = openAIChatProcessorConfig;
  }

  // Synchronously builds a chat completion request from the input messages and
  // the processor config, sends it via openAIService, and returns the reply.
  @Override
  public ChatMessage run(MultiChatMessage inputData) {
    // ... implementation elided ...
  }

  // Asynchronous variant that returns a Guava ListenableFuture instead of blocking.
  @Override
  public ListenableFuture<ChatMessage> runAsync(MultiChatMessage inputData) {
    // ... implementation elided ...
  }
}

Using a processor takes two steps:

1. One-time initialization: construct the config and the service, then build the processor from them.
2. Query the processor with your input.
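The initialize-once-then-query flow can be sketched with a minimal, self-contained stand-in. Note that `Processor`, `EchoChatProcessor`, and the model name below are toy placeholders for illustration, not the real Langtorch classes (which use `OpenAIService`, `MultiChatMessage`, and so on):

```java
import java.util.List;

// Toy stand-in for Langtorch's Processor interface (hypothetical types).
interface Processor<I, O> {
  O run(I inputData);
}

// A fake "LLM" processor: holds a config value and echoes its input,
// standing in for a processor that would call a chat completion API.
class EchoChatProcessor implements Processor<List<String>, String> {
  private final String modelName; // stands in for the processor config

  EchoChatProcessor(String modelName) {
    this.modelName = modelName;
  }

  @Override
  public String run(List<String> inputData) {
    // A real processor would send inputData to the LLM service here.
    return modelName + ": " + String.join(" | ", inputData);
  }
}

public class Main {
  public static void main(String[] args) {
    // Step 1 - one-time initialization: build the "config" and the processor.
    Processor<List<String>, String> processor = new EchoChatProcessor("gpt-3.5-turbo");

    // Step 2 - query the processor with input.
    String reply = processor.run(List.of("Hello", "How are you?"));
    System.out.println(reply); // prints "gpt-3.5-turbo: Hello | How are you?"
  }
}
```

With the real `OpenAIChatProcessor`, the same two steps apply: inject (or construct) the service and config once, then call `run` or `runAsync` per request.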

