
πŸ‘₯ Our Approach


Last updated 1 year ago

We believe that any application or service can be assembled in a modular way, and developing LLM or AI applications is no exception.

We divide an application or service into multiple components based on use case and function. For example, by combining the Loader, Vector Store, and LLM Service Provider components, we can create an application similar to chatpdf.com.
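
As a sketch of that composition, the chatpdf.com-style flow might wire the three components as below. All interface and class names here are simplified, hypothetical stand-ins for illustration, not Langtorch's actual API, and the vector store ranks by naive keyword overlap rather than real embeddings:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical component interfaces -- simplified stand-ins, not Langtorch's real API.
interface Loader { List<String> load(String source); }

interface VectorStore {
    void add(String chunk);
    List<String> similaritySearch(String query, int k);
}

interface LlmServiceProvider { String complete(String prompt); }

// Naive in-memory store: "similarity" here is just keyword overlap, for illustration.
class InMemoryVectorStore implements VectorStore {
    private final List<String> chunks = new ArrayList<>();

    public void add(String chunk) { chunks.add(chunk); }

    public List<String> similaritySearch(String query, int k) {
        List<String> ranked = new ArrayList<>(chunks);
        ranked.sort((a, b) -> Integer.compare(score(b, query), score(a, query)));
        return ranked.subList(0, Math.min(k, ranked.size()));
    }

    private int score(String chunk, String query) {
        int hits = 0;
        for (String word : query.toLowerCase().split("\\s+")) {
            if (chunk.toLowerCase().contains(word)) hits++;
        }
        return hits;
    }
}

// A document-QA pipeline assembled from the three components.
class DocumentQaApp {
    private final VectorStore store;
    private final LlmServiceProvider llm;

    DocumentQaApp(Loader loader, VectorStore store, LlmServiceProvider llm, String source) {
        this.store = store;
        this.llm = llm;
        loader.load(source).forEach(store::add); // index the document once up front
    }

    String ask(String question) {
        // Retrieve the most relevant chunks and hand them to the LLM as context.
        String context = String.join("\n", store.similaritySearch(question, 3));
        return llm.complete("Context:\n" + context + "\n\nQuestion: " + question);
    }
}
```

Each piece behind an interface can be swapped independently: a PDF loader for a web loader, an in-memory store for a hosted one, one LLM provider for another, without touching the pipeline itself.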

Components

In Langtorch, we have many components. Each type of component has its specific use case and function.

These include the Processor, LLM Service Provider, Parser, Loader, Prompt Template, Capability, Memory, and Vector Store, among others.

We have defined the interface for each type of component and provided some popular implementations for developers to use out of the box.

Developers can also build custom components by implementing these interfaces.
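
As a minimal sketch of what that looks like (the `Parser` interface shown is a simplified, hypothetical stand-in, not Langtorch's real signature), a custom component is just an implementation of the component's interface:

```java
import java.util.ArrayList;
import java.util.List;

// Simplified stand-in for a component interface (hypothetical, not the real signature).
interface Parser<T> {
    T parse(String llmOutput);
}

// A custom component: implement the interface to turn raw model output
// into a structured list of comma-separated items.
class CommaListParser implements Parser<List<String>> {
    public List<String> parse(String llmOutput) {
        List<String> items = new ArrayList<>();
        for (String part : llmOutput.split(",")) {
            String trimmed = part.trim();
            if (!trimmed.isEmpty()) items.add(trimmed);
        }
        return items;
    }
}
```

Because the rest of the pipeline only sees the interface, the custom parser can be dropped in anywhere a built-in one is expected.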

Dependency Injection

To make the combination of different components more flexible, we use dependency injection to decouple components from one another.
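
The decoupling idea can be sketched with plain constructor injection (the service and capability names below are hypothetical examples, not Langtorch classes): a consumer depends only on an interface, and the concrete provider is chosen at wiring time.

```java
// Components depend on interfaces, never on concrete classes (hypothetical names).
interface LlmService {
    String complete(String prompt);
}

// Two interchangeable providers; real implementations would call remote APIs.
class OpenAiService implements LlmService {
    public String complete(String prompt) { return "openai:" + prompt; } // stubbed call
}

class CohereService implements LlmService {
    public String complete(String prompt) { return "cohere:" + prompt; } // stubbed call
}

// The consumer receives its dependency through the constructor,
// so swapping providers requires no change to this class.
class SummarizeCapability {
    private final LlmService llm;

    SummarizeCapability(LlmService llm) { this.llm = llm; }

    String summarize(String text) { return llm.complete("Summarize: " + text); }
}
```

A DI framework such as Guice moves the remaining `new OpenAiService()` call out of application code and into a declarative binding (e.g. `bind(LlmService.class).to(OpenAiService.class)` inside a module), so changing providers becomes a one-line configuration change.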

Langtorch Hub

The Langtorch Hub offers a modular approach to configure and replace different components.

At the same time, Langtorch Hub can automatically assemble predefined auxiliary modules, such as tracking OpenAI token usage, through configuration files.

In Langtorch, the dependency injection framework we use is Guice. However, this does not mean you need to switch your existing dependency injection framework to use Langtorch: through some conversion methods, Guice modules can be converted into Spring Beans or into forms supported by other popular dependency injection frameworks.
