🔥 Our Approach
We believe any application or service can be assembled in a modular way, and LLM or AI applications are no exception.

We divide an application or service into components based on use case or function. For example, by combining a loader, a vector store, and an LLM service provider, we can build an application similar to chatpdf.com.
Langtorch provides many components, each with its own use case and function: Processor, LLM Service Provider, Parser, Loader, Prompt Template, Capability, Memory, Vector Store, and more.

We define an interface for each component type and ship popular implementations that work out of the box. Developers can also write custom components by implementing the corresponding interface.
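To illustrate the idea, here is a minimal sketch of what such component interfaces and a custom implementation might look like. The names (`Loader`, `LineLoader`) are illustrative assumptions for this sketch, not Langtorch's actual API:

```java
import java.util.List;

// Hypothetical sketch of Langtorch-style components; interface and class
// names here are invented for illustration, not the library's real API.
public class ComponentSketch {

    // A Loader turns a source (e.g., a file path or raw document)
    // into text chunks ready for embedding or prompting.
    interface Loader {
        List<String> load(String source);
    }

    // A custom component is just an implementation of the interface.
    // This toy loader splits a string into one chunk per line.
    static class LineLoader implements Loader {
        @Override
        public List<String> load(String source) {
            return List.of(source.split("\n"));
        }
    }

    public static void main(String[] args) {
        Loader loader = new LineLoader();
        List<String> chunks = loader.load("page one\npage two");
        System.out.println(chunks.size());
    }
}
```

Because every component is defined by an interface, a custom `Loader` (or Parser, Memory, etc.) can be dropped in anywhere the built-in implementations are used.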
To make components easier to combine, we use dependency injection to decouple them from one another.

The framework Langtorch uses is Guice. This does not mean you must switch your existing dependency injection framework to adopt Langtorch: through conversion methods, Guice modules can be turned into Spring beans or into forms supported by other popular dependency injection frameworks.
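The decoupling that Guice automates boils down to constructor injection: a component depends on an interface, and the concrete implementation is supplied from outside. A plain-Java sketch (component names are invented for illustration; the Guice binding is shown as a comment):

```java
// Illustrative sketch of the decoupling behind Guice-style dependency
// injection. Class and interface names are hypothetical, not Langtorch's API.
public class InjectionSketch {

    interface LlmServiceProvider {
        String complete(String prompt);
    }

    // A stub provider; a real one would call OpenAI or another service.
    static class EchoProvider implements LlmServiceProvider {
        @Override
        public String complete(String prompt) {
            return "echo: " + prompt;
        }
    }

    // The capability depends only on the interface, so providers can be
    // swapped without touching this class -- the point of the decoupling.
    static class SummarizeCapability {
        private final LlmServiceProvider llm;

        SummarizeCapability(LlmServiceProvider llm) {
            this.llm = llm;
        }

        String run(String text) {
            return llm.complete("Summarize: " + text);
        }
    }

    public static void main(String[] args) {
        // With Guice, this wiring would live in a Module, roughly:
        //   bind(LlmServiceProvider.class).to(EchoProvider.class);
        // and the constructor would carry an @Inject annotation.
        SummarizeCapability cap = new SummarizeCapability(new EchoProvider());
        System.out.println(cap.run("hello"));
    }
}
```

Swapping `EchoProvider` for a different provider requires changing only the binding (or the one `new` expression here), not the capability itself.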
The Langtorch Hub offers a modular way to configure and replace different components. It can also automatically assemble predefined auxiliary modules through configuration files, such as tracking OpenAI token usage.
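Conceptually, such a configuration-driven assembly might look like the following fragment. The schema below is purely hypothetical, written to convey the idea; it is not Langtorch Hub's actual configuration format:

```yaml
# Hypothetical Langtorch Hub configuration (illustrative schema only)
components:
  loader: pdf            # which Loader implementation to bind
  vectorStore: in-memory # which Vector Store implementation to bind
  llmProvider: openai    # which LLM Service Provider to bind

auxiliary:
  tokenUsageTracking: true  # auto-assemble the OpenAI token-usage module
```

Changing a single key swaps the bound implementation without touching application code, which is the payoff of the dependency-injection design described above.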