Releases: langchain4j/langchain4j
0.23.0
- Updates to the models API: models now return `Response<T>` instead of `T`. `Response<T>` contains token usage and the finish reason.
- All model and embedding store integrations now live in their own modules
- Integration with Vespa by @Heezer
- Integration with Elasticsearch by @Martin7-1
- Integration with Redis by @Martin7-1
- Integration with Milvus by @IuriiKoval
- Integration with Astra DB and Cassandra by @clun
- Added support for overlap in document splitters
- Some bugfixes and smaller improvements
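The new return type can be used roughly as below. This is a minimal sketch against the 0.23.0 API; the environment variable name and class name are illustrative.

```java
import dev.langchain4j.data.message.AiMessage;
import dev.langchain4j.model.openai.OpenAiChatModel;
import dev.langchain4j.model.output.Response;

import static dev.langchain4j.data.message.UserMessage.userMessage;

public class ResponseExample {

    public static void main(String[] args) {
        OpenAiChatModel model = OpenAiChatModel.withApiKey(System.getenv("OPENAI_API_KEY"));

        // generate(...) now returns Response<AiMessage> instead of a bare AiMessage
        Response<AiMessage> response = model.generate(userMessage("Hello!"));

        AiMessage aiMessage = response.content();    // the actual answer
        System.out.println(aiMessage.text());
        System.out.println(response.tokenUsage());   // input/output/total token counts
        System.out.println(response.finishReason()); // e.g. STOP or LENGTH
    }
}
```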
0.22.0
- Integration with Google Vertex AI by @kuraleta
- Offline text classification with embeddings
- Reworked document splitters
- `InMemoryEmbeddingStore` can now be easily persisted and restored; see `serializeToJson()`, `serializeToFile()`, `fromJson()` and `fromFile()`
- Added an option to easily extract metadata in `HtmlTextExtractor`
- Fixed #126 and #127
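The persistence round-trip looks roughly like this (a sketch; adding embeddings to the store is elided):

```java
import dev.langchain4j.data.segment.TextSegment;
import dev.langchain4j.store.embedding.inmemory.InMemoryEmbeddingStore;

public class PersistStoreExample {

    public static void main(String[] args) {
        InMemoryEmbeddingStore<TextSegment> store = new InMemoryEmbeddingStore<>();
        // ... add embeddings to the store here ...

        // Persist to a JSON string (serializeToFile(...) writes to disk instead)
        String json = store.serializeToJson();

        // Later, restore an equivalent store from the same JSON
        InMemoryEmbeddingStore<TextSegment> restored = InMemoryEmbeddingStore.fromJson(json);
    }
}
```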
0.21.0
- Integration with Azure OpenAI by @kuraleta
- Integration with Qwen models (DashScope) by @jiangsier-xyz
- Integration with Chroma by @kuraleta
- Support for persistent ChatMemory
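Persistence is enabled by plugging a `ChatMemoryStore` into a chat memory. The store below keeps messages in a map purely for illustration; a real implementation would write to a database or file, and the exact interface may differ slightly between versions.

```java
import dev.langchain4j.data.message.ChatMessage;
import dev.langchain4j.store.memory.chat.ChatMemoryStore;

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// A toy ChatMemoryStore backed by an in-memory map.
public class MapChatMemoryStore implements ChatMemoryStore {

    private final Map<Object, List<ChatMessage>> storage = new HashMap<>();

    @Override
    public List<ChatMessage> getMessages(Object memoryId) {
        return storage.getOrDefault(memoryId, new ArrayList<>());
    }

    @Override
    public void updateMessages(Object memoryId, List<ChatMessage> messages) {
        storage.put(memoryId, messages);
    }

    @Override
    public void deleteMessages(Object memoryId) {
        storage.remove(memoryId);
    }
}
```

The store is then passed to the memory builder, e.g. `MessageWindowChatMemory.builder().chatMemoryStore(new MapChatMemoryStore()).maxMessages(10).build()`.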
0.20.0
0.19.0
- Weaviate integration by @Heezer
- DOC, XLS and PPT loaders by @oognuyh
- Separate chat memory for each user
- Custom in-process embedding models
- Added lots of Javadoc
- Added `DocumentTransformer` and its first implementation: `HtmlTextExtractor`
- `OpenAiTokenizer` is now more precise and can estimate tokens for tools/functions
- Added an option to force tool/function execution in `OpenAiChatModel` and `OpenAiStreamingChatModel`
- Some bugfixes and improvements
0.18.0
- We've added integration with LocalAI. Now, you can use LLMs hosted locally!
- Added support for response streaming in AI Services.
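Streaming AI Services return a `TokenStream` instead of a `String`. A minimal sketch (the interface name and environment variable are illustrative; callback method names may vary slightly by version):

```java
import dev.langchain4j.model.openai.OpenAiStreamingChatModel;
import dev.langchain4j.service.AiServices;
import dev.langchain4j.service.TokenStream;

public class StreamingExample {

    // Declaring TokenStream as the return type enables streaming for this service
    interface Assistant {
        TokenStream chat(String message);
    }

    public static void main(String[] args) {
        Assistant assistant = AiServices.create(Assistant.class,
                OpenAiStreamingChatModel.withApiKey(System.getenv("OPENAI_API_KEY")));

        assistant.chat("Tell me a joke")
                .onNext(System.out::print)          // called for each new token
                .onError(Throwable::printStackTrace)
                .start();
    }
}
```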
0.17.0
Added in-process embedding models:
- all-minilm-l6-v2
- all-minilm-l6-v2-q
- e5-small-v2
- e5-small-v2-q
The idea is to give users an option to embed documents/texts in the same Java process without any external dependencies.
ONNX Runtime is used to run models inside JVM.
Each model resides in its own Maven module (inside the jar).
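Usage looks roughly like this, assuming the `langchain4j-embeddings-all-minilm-l6-v2` module is on the classpath. Note that in this release `embed(...)` returned the `Embedding` directly; from 0.23.0 onward it is wrapped in `Response<Embedding>`.

```java
import dev.langchain4j.data.embedding.Embedding;
import dev.langchain4j.model.embedding.AllMiniLmL6V2EmbeddingModel;
import dev.langchain4j.model.embedding.EmbeddingModel;

public class InProcessEmbeddingExample {

    public static void main(String[] args) {
        // Runs fully inside the JVM via ONNX Runtime; no external service needed
        EmbeddingModel model = new AllMiniLmL6V2EmbeddingModel();

        Embedding embedding = model.embed("Hello, world!");
        System.out.println(embedding.vector().length); // 384 dimensions for all-MiniLM-L6-v2
    }
}
```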
0.16.0
Added more request parameters for OpenAi models:
- top_p
- max_tokens
- presence_penalty
- frequency_penalty
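The new parameters are set on the model builder. A sketch (the values and environment variable name are illustrative):

```java
import dev.langchain4j.model.openai.OpenAiChatModel;

public class OpenAiParametersExample {

    public static void main(String[] args) {
        OpenAiChatModel model = OpenAiChatModel.builder()
                .apiKey(System.getenv("OPENAI_API_KEY"))
                .topP(0.9)             // nucleus sampling threshold
                .maxTokens(500)        // cap on generated tokens
                .presencePenalty(0.1)  // discourage revisiting topics
                .frequencyPenalty(0.1) // discourage repeating tokens
                .build();
    }
}
```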
0.15.0
You can now try out OpenAI's `gpt-3.5-turbo` and `text-embedding-ada-002` models with LangChain4j for free, without needing an OpenAI account and keys!
Simply use the API key "demo".
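For example (a minimal sketch):

```java
import dev.langchain4j.model.openai.OpenAiChatModel;

public class DemoKeyExample {

    public static void main(String[] args) {
        // The "demo" key is rate-limited and intended only for trying things out
        OpenAiChatModel model = OpenAiChatModel.withApiKey("demo");
        System.out.println(model.generate("Hello!"));
    }
}
```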
0.14.0
- Simplified the API for all models by removing the `Result` class. Models now return results (`AiMessage`/`Embedding`/`Moderation`/etc.) directly, without wrapping them in a `Result` object.
- Fixed a bug that prevented using `@UserMessage` in AI Services.